00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3987 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3582 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.045 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.045 The recommended git tool is: git 00:00:00.045 using credential 00000000-0000-0000-0000-000000000002 00:00:00.047 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.077 Fetching changes from the remote Git repository 00:00:00.079 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.119 Using shallow fetch with depth 1 00:00:00.119 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.119 > git --version # timeout=10 00:00:00.159 > git --version # 'git version 2.39.2' 00:00:00.159 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.186 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.186 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.880 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.896 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.909 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD) 00:00:05.909 > git config core.sparsecheckout # timeout=10 00:00:05.921 > git read-tree -mu HEAD # timeout=10 00:00:05.937 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5 00:00:05.962 Commit message: "packer: Fix typo in a package name" 00:00:05.962 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10 00:00:06.093 [Pipeline] Start of Pipeline 00:00:06.109 [Pipeline] library 00:00:06.111 Loading library shm_lib@master 00:00:06.111 Library shm_lib@master is cached. Copying from home. 00:00:06.130 [Pipeline] node 00:00:06.140 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.141 [Pipeline] { 00:00:06.152 [Pipeline] catchError 00:00:06.153 [Pipeline] { 00:00:06.164 [Pipeline] wrap 00:00:06.170 [Pipeline] { 00:00:06.180 [Pipeline] stage 00:00:06.182 [Pipeline] { (Prologue) 00:00:06.367 [Pipeline] sh 00:00:06.653 + logger -p user.info -t JENKINS-CI 00:00:06.675 [Pipeline] echo 00:00:06.676 Node: GP11 00:00:06.686 [Pipeline] sh 00:00:06.992 [Pipeline] setCustomBuildProperty 00:00:07.004 [Pipeline] echo 00:00:07.005 Cleanup processes 00:00:07.009 [Pipeline] sh 00:00:07.295 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.295 2084928 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.310 [Pipeline] sh 00:00:07.596 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.596 ++ grep -v 'sudo pgrep' 00:00:07.596 ++ awk '{print $1}' 00:00:07.596 + sudo kill -9 00:00:07.596 + true 00:00:07.613 [Pipeline] cleanWs 00:00:07.624 [WS-CLEANUP] Deleting project workspace... 00:00:07.624 [WS-CLEANUP] Deferred wipeout is used... 
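The process-cleanup step traced above collapses into a single pgrep/awk/kill pipeline; a condensed sketch of the same pattern (workspace path copied from the log, the trailing || true mirroring the "+ true" guard so an empty PID list does not fail the stage):

  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # list leftover SPDK processes from a previous run, drop the pgrep helper itself, keep only the PIDs
  PIDS=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  sudo kill -9 $PIDS || true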
00:00:07.631 [WS-CLEANUP] done 00:00:07.635 [Pipeline] setCustomBuildProperty 00:00:07.650 [Pipeline] sh 00:00:07.934 + sudo git config --global --replace-all safe.directory '*' 00:00:08.146 [Pipeline] httpRequest 00:00:08.581 [Pipeline] echo 00:00:08.583 Sorcerer 10.211.164.101 is alive 00:00:08.594 [Pipeline] retry 00:00:08.596 [Pipeline] { 00:00:08.611 [Pipeline] httpRequest 00:00:08.616 HttpMethod: GET 00:00:08.616 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:08.617 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:08.640 Response Code: HTTP/1.1 200 OK 00:00:08.641 Success: Status code 200 is in the accepted range: 200,404 00:00:08.641 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:20.966 [Pipeline] } 00:00:20.985 [Pipeline] // retry 00:00:20.993 [Pipeline] sh 00:00:21.281 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:21.300 [Pipeline] httpRequest 00:00:21.702 [Pipeline] echo 00:00:21.704 Sorcerer 10.211.164.101 is alive 00:00:21.713 [Pipeline] retry 00:00:21.716 [Pipeline] { 00:00:21.729 [Pipeline] httpRequest 00:00:21.734 HttpMethod: GET 00:00:21.735 URL: http://10.211.164.101/packages/spdk_169c3cd047cec29b3b1e206c9259a77f3e6a8077.tar.gz 00:00:21.735 Sending request to url: http://10.211.164.101/packages/spdk_169c3cd047cec29b3b1e206c9259a77f3e6a8077.tar.gz 00:00:21.737 Response Code: HTTP/1.1 200 OK 00:00:21.738 Success: Status code 200 is in the accepted range: 200,404 00:00:21.738 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_169c3cd047cec29b3b1e206c9259a77f3e6a8077.tar.gz 00:00:43.203 [Pipeline] } 00:00:43.226 [Pipeline] // retry 00:00:43.235 [Pipeline] sh 00:00:43.526 + tar --no-same-owner -xf spdk_169c3cd047cec29b3b1e206c9259a77f3e6a8077.tar.gz 00:00:46.843 [Pipeline] sh 00:00:47.134 + git -C spdk log --oneline -n5 00:00:47.134 169c3cd04 thread: set SPDK_CONFIG_MAX_NUMA_NODES to 1 if not defined 00:00:47.134 cab1decc1 thread: add NUMA node support to spdk_iobuf_put() 00:00:47.134 40c9acf6d env: add spdk_mem_get_numa_id 00:00:47.134 0f99ab2fa thread: allocate iobuf memory based on numa_id 00:00:47.134 2ef611c19 thread: update all iobuf non-get/put functions for multiple NUMA nodes 00:00:47.158 [Pipeline] withCredentials 00:00:47.170 > git --version # timeout=10 00:00:47.184 > git --version # 'git version 2.39.2' 00:00:47.203 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:47.206 [Pipeline] { 00:00:47.215 [Pipeline] retry 00:00:47.217 [Pipeline] { 00:00:47.235 [Pipeline] sh 00:00:47.525 + git ls-remote http://dpdk.org/git/dpdk main 00:00:47.539 [Pipeline] } 00:00:47.559 [Pipeline] // retry 00:00:47.566 [Pipeline] } 00:00:47.584 [Pipeline] // withCredentials 00:00:47.596 [Pipeline] httpRequest 00:00:48.009 [Pipeline] echo 00:00:48.012 Sorcerer 10.211.164.101 is alive 00:00:48.023 [Pipeline] retry 00:00:48.026 [Pipeline] { 00:00:48.044 [Pipeline] httpRequest 00:00:48.049 HttpMethod: GET 00:00:48.049 URL: http://10.211.164.101/packages/dpdk_6dad0bb5c8621644beca86ff5f4910a943ba604d.tar.gz 00:00:48.050 Sending request to url: http://10.211.164.101/packages/dpdk_6dad0bb5c8621644beca86ff5f4910a943ba604d.tar.gz 00:00:48.064 Response Code: HTTP/1.1 200 OK 00:00:48.064 Success: Status code 200 is in the accepted range: 200,404 00:00:48.065 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_6dad0bb5c8621644beca86ff5f4910a943ba604d.tar.gz 00:01:16.042 [Pipeline] } 00:01:16.062 [Pipeline] // retry 00:01:16.071 [Pipeline] sh 00:01:16.357 + tar --no-same-owner -xf dpdk_6dad0bb5c8621644beca86ff5f4910a943ba604d.tar.gz 00:01:18.273 [Pipeline] sh 00:01:18.562 + git -C dpdk log --oneline -n5 00:01:18.562 6dad0bb5c8 event/cnxk: fix getwork write data on reconfig 00:01:18.562 b74f298f9b test/event: fix device stop 00:01:18.562 34e3ad3a1e eventdev: remove single event enqueue and dequeue 00:01:18.562 5079ede71e event/skeleton: remove single event enqueue and dequeue 00:01:18.562 a83fc0f4e1 event/cnxk: remove single event enqueue and dequeue 00:01:18.574 [Pipeline] } 00:01:18.588 [Pipeline] // stage 00:01:18.596 [Pipeline] stage 00:01:18.599 [Pipeline] { (Prepare) 00:01:18.618 [Pipeline] writeFile 00:01:18.633 [Pipeline] sh 00:01:18.919 + logger -p user.info -t JENKINS-CI 00:01:18.934 [Pipeline] sh 00:01:19.224 + logger -p user.info -t JENKINS-CI 00:01:19.250 [Pipeline] sh 00:01:19.546 + cat autorun-spdk.conf 00:01:19.546 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.546 SPDK_TEST_NVMF=1 00:01:19.546 SPDK_TEST_NVME_CLI=1 00:01:19.546 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.546 SPDK_TEST_NVMF_NICS=e810 00:01:19.546 SPDK_TEST_VFIOUSER=1 00:01:19.546 SPDK_RUN_UBSAN=1 00:01:19.546 NET_TYPE=phy 00:01:19.546 SPDK_TEST_NATIVE_DPDK=main 00:01:19.546 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:19.553 RUN_NIGHTLY=1 00:01:19.558 [Pipeline] readFile 00:01:19.579 [Pipeline] withEnv 00:01:19.581 [Pipeline] { 00:01:19.594 [Pipeline] sh 00:01:19.885 + set -ex 00:01:19.885 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:19.886 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:19.886 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.886 ++ SPDK_TEST_NVMF=1 00:01:19.886 ++ SPDK_TEST_NVME_CLI=1 00:01:19.886 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.886 ++ SPDK_TEST_NVMF_NICS=e810 00:01:19.886 ++ SPDK_TEST_VFIOUSER=1 00:01:19.886 ++ SPDK_RUN_UBSAN=1 00:01:19.886 ++ NET_TYPE=phy 00:01:19.886 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:19.886 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:19.886 ++ RUN_NIGHTLY=1 00:01:19.886 + case $SPDK_TEST_NVMF_NICS in 00:01:19.886 + DRIVERS=ice 00:01:19.886 + [[ tcp == \r\d\m\a ]] 00:01:19.886 + [[ -n ice ]] 00:01:19.886 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:19.886 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:19.886 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:19.886 rmmod: ERROR: Module irdma is not currently loaded 00:01:19.886 rmmod: ERROR: Module i40iw is not currently loaded 00:01:19.886 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:19.886 + true 00:01:19.886 + for D in $DRIVERS 00:01:19.886 + sudo modprobe ice 00:01:19.886 + exit 0 00:01:19.896 [Pipeline] } 00:01:19.910 [Pipeline] // withEnv 00:01:19.915 [Pipeline] } 00:01:19.930 [Pipeline] // stage 00:01:19.939 [Pipeline] catchError 00:01:19.941 [Pipeline] { 00:01:19.957 [Pipeline] timeout 00:01:19.958 Timeout set to expire in 1 hr 0 min 00:01:19.960 [Pipeline] { 00:01:19.975 [Pipeline] stage 00:01:19.977 [Pipeline] { (Tests) 00:01:19.992 [Pipeline] sh 00:01:20.281 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.281 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.281 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 
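The NIC preparation in the withEnv block above (SPDK_TEST_NVMF_NICS=e810 selects the ice driver) is a plain unload-then-load sequence; a minimal sketch, with the module list taken from the trace and the "not currently loaded" rmmod errors deliberately ignored:

  DRIVERS=ice                                                # e810 NICs are driven by the ice module
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true    # clear competing RDMA/iWARP modules; errors here are harmless
  for D in $DRIVERS; do sudo modprobe "$D"; done             # load the driver needed for the TCP/phy run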
00:01:20.281 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:20.281 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.281 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.281 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:20.281 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.281 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.281 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.281 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:20.281 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.281 + source /etc/os-release 00:01:20.281 ++ NAME='Fedora Linux' 00:01:20.281 ++ VERSION='39 (Cloud Edition)' 00:01:20.281 ++ ID=fedora 00:01:20.281 ++ VERSION_ID=39 00:01:20.281 ++ VERSION_CODENAME= 00:01:20.281 ++ PLATFORM_ID=platform:f39 00:01:20.281 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:20.281 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.281 ++ LOGO=fedora-logo-icon 00:01:20.281 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:20.281 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.281 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:20.281 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.281 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.281 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.281 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:20.281 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.281 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:20.281 ++ SUPPORT_END=2024-11-12 00:01:20.281 ++ VARIANT='Cloud Edition' 00:01:20.281 ++ VARIANT_ID=cloud 00:01:20.281 + uname -a 00:01:20.281 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:20.281 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:21.217 Hugepages 00:01:21.217 node hugesize free / total 00:01:21.217 node0 1048576kB 0 / 0 00:01:21.217 node0 2048kB 0 / 0 00:01:21.217 node1 1048576kB 0 / 0 00:01:21.217 node1 2048kB 0 / 0 00:01:21.217 00:01:21.217 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:21.217 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:21.217 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:21.217 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:21.217 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:21.217 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:21.477 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:21.477 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:21.477 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:21.477 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:21.477 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:21.477 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:21.477 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:21.477 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:21.477 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:21.477 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:21.477 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:21.477 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:21.477 + rm -f /tmp/spdk-ld-path 00:01:21.477 + source autorun-spdk.conf 00:01:21.477 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.477 ++ SPDK_TEST_NVMF=1 00:01:21.477 ++ SPDK_TEST_NVME_CLI=1 00:01:21.477 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.477 ++ SPDK_TEST_NVMF_NICS=e810 00:01:21.477 ++ SPDK_TEST_VFIOUSER=1 00:01:21.477 ++ SPDK_RUN_UBSAN=1 00:01:21.477 ++ NET_TYPE=phy 00:01:21.477 ++ 
SPDK_TEST_NATIVE_DPDK=main 00:01:21.477 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:21.477 ++ RUN_NIGHTLY=1 00:01:21.477 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:21.477 + [[ -n '' ]] 00:01:21.477 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:21.477 + for M in /var/spdk/build-*-manifest.txt 00:01:21.477 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:21.477 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:21.477 + for M in /var/spdk/build-*-manifest.txt 00:01:21.477 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:21.477 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:21.477 + for M in /var/spdk/build-*-manifest.txt 00:01:21.477 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:21.477 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:21.477 ++ uname 00:01:21.477 + [[ Linux == \L\i\n\u\x ]] 00:01:21.477 + sudo dmesg -T 00:01:21.477 + sudo dmesg --clear 00:01:21.477 + dmesg_pid=2086271 00:01:21.477 + [[ Fedora Linux == FreeBSD ]] 00:01:21.477 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.477 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.477 + sudo dmesg -Tw 00:01:21.477 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:21.477 + [[ -x /usr/src/fio-static/fio ]] 00:01:21.477 + export FIO_BIN=/usr/src/fio-static/fio 00:01:21.477 + FIO_BIN=/usr/src/fio-static/fio 00:01:21.477 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:21.477 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:21.477 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:21.477 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.477 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.477 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:21.477 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.477 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.477 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:21.477 Test configuration: 00:01:21.477 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.477 SPDK_TEST_NVMF=1 00:01:21.477 SPDK_TEST_NVME_CLI=1 00:01:21.477 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.477 SPDK_TEST_NVMF_NICS=e810 00:01:21.477 SPDK_TEST_VFIOUSER=1 00:01:21.477 SPDK_RUN_UBSAN=1 00:01:21.477 NET_TYPE=phy 00:01:21.477 SPDK_TEST_NATIVE_DPDK=main 00:01:21.477 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:21.477 RUN_NIGHTLY=1 04:37:12 -- common/autotest_common.sh@1688 -- $ [[ n == y ]] 00:01:21.477 04:37:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:21.477 04:37:12 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:21.477 04:37:12 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:21.477 04:37:12 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:21.477 04:37:12 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:21.477 04:37:12 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.477 04:37:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.477 04:37:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.477 04:37:12 -- paths/export.sh@5 -- $ export PATH 00:01:21.477 04:37:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.477 04:37:12 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:21.477 04:37:12 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:21.477 04:37:12 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730086632.XXXXXX 00:01:21.477 04:37:12 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730086632.NdzpG9 00:01:21.477 04:37:12 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:21.477 04:37:12 -- common/autobuild_common.sh@492 -- $ '[' -n main ']' 00:01:21.477 04:37:12 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:21.477 04:37:12 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:21.477 04:37:12 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:21.477 04:37:12 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:21.477 04:37:12 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:21.477 04:37:12 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:21.477 04:37:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.478 04:37:12 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:21.478 
04:37:12 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:21.478 04:37:12 -- pm/common@17 -- $ local monitor 00:01:21.478 04:37:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.478 04:37:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.478 04:37:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.478 04:37:12 -- pm/common@21 -- $ date +%s 00:01:21.478 04:37:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.478 04:37:12 -- pm/common@21 -- $ date +%s 00:01:21.478 04:37:12 -- pm/common@25 -- $ sleep 1 00:01:21.478 04:37:12 -- pm/common@21 -- $ date +%s 00:01:21.478 04:37:12 -- pm/common@21 -- $ date +%s 00:01:21.478 04:37:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730086632 00:01:21.478 04:37:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730086632 00:01:21.478 04:37:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730086632 00:01:21.478 04:37:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730086632 00:01:21.738 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730086632_collect-vmstat.pm.log 00:01:21.738 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730086632_collect-cpu-load.pm.log 00:01:21.738 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730086632_collect-cpu-temp.pm.log 00:01:21.738 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730086632_collect-bmc-pm.bmc.pm.log 00:01:22.679 04:37:13 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:22.679 04:37:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:22.679 04:37:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:22.679 04:37:13 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.679 04:37:13 -- spdk/autobuild.sh@16 -- $ date -u 00:01:22.679 Mon Oct 28 03:37:13 AM UTC 2024 00:01:22.679 04:37:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:22.679 v25.01-pre-118-g169c3cd04 00:01:22.679 04:37:13 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:22.679 04:37:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:22.679 04:37:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:22.679 04:37:13 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:22.679 04:37:13 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:22.679 04:37:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.679 ************************************ 00:01:22.679 START TEST ubsan 00:01:22.679 ************************************ 00:01:22.679 04:37:13 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:22.679 using ubsan 00:01:22.679 00:01:22.679 real 0m0.000s 00:01:22.679 user 0m0.000s 
00:01:22.679 sys 0m0.000s 00:01:22.679 04:37:13 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:22.679 04:37:13 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:22.679 ************************************ 00:01:22.679 END TEST ubsan 00:01:22.679 ************************************ 00:01:22.679 04:37:13 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:22.679 04:37:13 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:22.679 04:37:13 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:22.679 04:37:13 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:22.679 04:37:13 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:22.679 04:37:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.679 ************************************ 00:01:22.679 START TEST build_native_dpdk 00:01:22.679 ************************************ 00:01:22.679 04:37:13 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:22.679 04:37:13 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:22.680 6dad0bb5c8 event/cnxk: fix getwork write data on reconfig 00:01:22.680 b74f298f9b test/event: fix device stop 00:01:22.680 34e3ad3a1e eventdev: remove single event enqueue and dequeue 00:01:22.680 5079ede71e event/skeleton: remove single event enqueue and dequeue 00:01:22.680 a83fc0f4e1 event/cnxk: remove single event enqueue and dequeue 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc1 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.11.0-rc1 21.11.0 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc1 '<' 21.11.0 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:22.680 04:37:13 build_native_dpdk -- 
scripts/common.sh@344 -- $ case "$op" in 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:22.680 patching file config/rte_config.h 00:01:22.680 Hunk #1 succeeded at 71 (offset 12 lines). 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc1 24.07.0 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc1 '<' 24.07.0 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:22.680 04:37:13 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 24.11.0-rc1 24.07.0 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc1 '>=' 24.07.0 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:22.680 04:37:13 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:22.681 04:37:13 build_native_dpdk -- 
scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:22.681 04:37:13 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:01:22.681 04:37:13 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:01:22.681 patching file drivers/bus/pci/linux/pci_uio.c 00:01:22.681 04:37:13 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:22.681 04:37:13 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:22.681 04:37:13 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:22.681 04:37:13 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:22.681 04:37:13 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:26.881 The Meson build system 00:01:26.881 Version: 1.5.0 00:01:26.881 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:26.881 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:26.881 Build type: native build 00:01:26.881 Project name: DPDK 
00:01:26.881 Project version: 24.11.0-rc1 00:01:26.881 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:26.881 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:26.881 Host machine cpu family: x86_64 00:01:26.881 Host machine cpu: x86_64 00:01:26.881 Message: ## Building in Developer Mode ## 00:01:26.881 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:26.881 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:26.881 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:26.881 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:26.881 Program cat found: YES (/usr/bin/cat) 00:01:26.881 config/meson.build:119: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:26.881 Compiler for C supports arguments -march=native: YES 00:01:26.881 Checking for size of "void *" : 8 00:01:26.881 Checking for size of "void *" : 8 (cached) 00:01:26.881 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:26.881 Library m found: YES 00:01:26.881 Library numa found: YES 00:01:26.881 Has header "numaif.h" : YES 00:01:26.881 Library fdt found: NO 00:01:26.881 Library execinfo found: NO 00:01:26.881 Has header "execinfo.h" : YES 00:01:26.881 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:26.881 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:26.881 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:26.881 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:26.881 Run-time dependency openssl found: YES 3.1.1 00:01:26.881 Run-time dependency libpcap found: YES 1.10.4 00:01:26.881 Has header "pcap.h" with dependency libpcap: YES 00:01:26.881 Compiler for C supports arguments -Wcast-qual: YES 00:01:26.881 Compiler for C supports arguments -Wdeprecated: YES 00:01:26.881 Compiler for C supports arguments -Wformat: YES 00:01:26.881 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:26.881 Compiler for C supports arguments -Wformat-security: NO 00:01:26.881 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:26.881 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:26.881 Compiler for C supports arguments -Wnested-externs: YES 00:01:26.881 Compiler for C supports arguments -Wold-style-definition: YES 00:01:26.882 Compiler for C supports arguments -Wpointer-arith: YES 00:01:26.882 Compiler for C supports arguments -Wsign-compare: YES 00:01:26.882 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:26.882 Compiler for C supports arguments -Wundef: YES 00:01:26.882 Compiler for C supports arguments -Wwrite-strings: YES 00:01:26.882 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:26.882 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:26.882 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:26.882 Program objdump found: YES (/usr/bin/objdump) 00:01:26.882 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES 00:01:26.882 Checking if "AVX512 checking" compiles: YES 00:01:26.882 Fetching value of define "__AVX512F__" : (undefined) 00:01:26.882 Fetching value of define "__SSE4_2__" : 1 00:01:26.882 Fetching value of define "__AES__" : 1 00:01:26.882 Fetching value of define "__AVX__" : 1 00:01:26.882 Fetching value of 
define "__AVX2__" : (undefined) 00:01:26.882 Fetching value of define "__AVX512BW__" : (undefined) 00:01:26.882 Fetching value of define "__AVX512CD__" : (undefined) 00:01:26.882 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:26.882 Fetching value of define "__AVX512F__" : (undefined) 00:01:26.882 Fetching value of define "__AVX512VL__" : (undefined) 00:01:26.882 Fetching value of define "__PCLMUL__" : 1 00:01:26.882 Fetching value of define "__RDRND__" : 1 00:01:26.882 Fetching value of define "__RDSEED__" : (undefined) 00:01:26.882 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:26.882 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:26.882 Message: lib/log: Defining dependency "log" 00:01:26.882 Message: lib/kvargs: Defining dependency "kvargs" 00:01:26.882 Message: lib/argparse: Defining dependency "argparse" 00:01:26.882 Message: lib/telemetry: Defining dependency "telemetry" 00:01:26.882 Checking for function "getentropy" : NO 00:01:26.882 Message: lib/eal: Defining dependency "eal" 00:01:26.882 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:26.882 Message: lib/ring: Defining dependency "ring" 00:01:26.882 Message: lib/rcu: Defining dependency "rcu" 00:01:26.882 Message: lib/mempool: Defining dependency "mempool" 00:01:26.882 Message: lib/mbuf: Defining dependency "mbuf" 00:01:26.882 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:26.882 Compiler for C supports arguments -mpclmul: YES 00:01:26.882 Compiler for C supports arguments -maes: YES 00:01:26.882 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:26.882 Message: lib/net: Defining dependency "net" 00:01:26.882 Message: lib/meter: Defining dependency "meter" 00:01:26.882 Message: lib/ethdev: Defining dependency "ethdev" 00:01:26.882 Message: lib/pci: Defining dependency "pci" 00:01:26.882 Message: lib/cmdline: Defining dependency "cmdline" 00:01:26.882 Message: lib/metrics: Defining dependency "metrics" 00:01:26.882 Message: lib/hash: Defining dependency "hash" 00:01:26.882 Message: lib/timer: Defining dependency "timer" 00:01:26.882 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:26.882 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:26.882 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:26.882 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:26.882 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:26.882 Message: lib/acl: Defining dependency "acl" 00:01:26.882 Message: lib/bbdev: Defining dependency "bbdev" 00:01:26.882 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:26.882 Run-time dependency libelf found: YES 0.191 00:01:26.882 Message: lib/bpf: Defining dependency "bpf" 00:01:26.882 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:26.882 Message: lib/compressdev: Defining dependency "compressdev" 00:01:26.882 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:26.882 Message: lib/distributor: Defining dependency "distributor" 00:01:26.882 Message: lib/dmadev: Defining dependency "dmadev" 00:01:26.882 Message: lib/efd: Defining dependency "efd" 00:01:26.882 Message: lib/eventdev: Defining dependency "eventdev" 00:01:26.882 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:26.882 Message: lib/gpudev: Defining dependency "gpudev" 00:01:26.882 Message: lib/gro: Defining dependency "gro" 00:01:26.882 Message: lib/gso: Defining dependency "gso" 00:01:26.882 Message: 
lib/ip_frag: Defining dependency "ip_frag" 00:01:26.882 Message: lib/jobstats: Defining dependency "jobstats" 00:01:26.882 Message: lib/latencystats: Defining dependency "latencystats" 00:01:26.882 Message: lib/lpm: Defining dependency "lpm" 00:01:26.882 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:26.882 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:26.882 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:26.882 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:26.882 Message: lib/member: Defining dependency "member" 00:01:26.882 Message: lib/pcapng: Defining dependency "pcapng" 00:01:26.882 Message: lib/power: Defining dependency "power" 00:01:26.882 Message: lib/rawdev: Defining dependency "rawdev" 00:01:26.882 Message: lib/regexdev: Defining dependency "regexdev" 00:01:26.882 Message: lib/mldev: Defining dependency "mldev" 00:01:26.882 Message: lib/rib: Defining dependency "rib" 00:01:26.882 Message: lib/reorder: Defining dependency "reorder" 00:01:26.882 Message: lib/sched: Defining dependency "sched" 00:01:26.882 Message: lib/security: Defining dependency "security" 00:01:26.882 Message: lib/stack: Defining dependency "stack" 00:01:26.882 Has header "linux/userfaultfd.h" : YES 00:01:26.882 Has header "linux/vduse.h" : YES 00:01:26.882 Message: lib/vhost: Defining dependency "vhost" 00:01:26.882 Message: lib/ipsec: Defining dependency "ipsec" 00:01:26.882 Message: lib/pdcp: Defining dependency "pdcp" 00:01:26.882 Message: lib/fib: Defining dependency "fib" 00:01:26.882 Message: lib/port: Defining dependency "port" 00:01:26.882 Message: lib/pdump: Defining dependency "pdump" 00:01:26.882 Message: lib/table: Defining dependency "table" 00:01:26.882 Message: lib/pipeline: Defining dependency "pipeline" 00:01:26.882 Message: lib/graph: Defining dependency "graph" 00:01:26.882 Message: lib/node: Defining dependency "node" 00:01:26.882 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:26.882 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:26.882 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:26.882 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:26.882 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:26.882 Compiler for C supports arguments -Wno-unused-value: YES 00:01:26.882 Compiler for C supports arguments -Wno-format: YES 00:01:26.882 Compiler for C supports arguments -Wno-format-security: YES 00:01:26.882 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:26.882 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:26.882 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:28.796 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:28.796 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:28.796 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:28.796 Has header "sys/epoll.h" : YES 00:01:28.796 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:28.796 Configuring doxy-api-html.conf using configuration 00:01:28.796 doc/api/meson.build:54: WARNING: The variable(s) 'DTS_API_MAIN_PAGE' in the input file 'doc/api/doxy-api.conf.in' are not present in the given configuration data. 
00:01:28.796 Configuring doxy-api-man.conf using configuration 00:01:28.796 doc/api/meson.build:67: WARNING: The variable(s) 'DTS_API_MAIN_PAGE' in the input file 'doc/api/doxy-api.conf.in' are not present in the given configuration data. 00:01:28.796 Program mandb found: YES (/usr/bin/mandb) 00:01:28.796 Program sphinx-build found: NO 00:01:28.796 Program sphinx-build found: NO 00:01:28.796 Configuring rte_build_config.h using configuration 00:01:28.796 Message: 00:01:28.796 ================= 00:01:28.796 Applications Enabled 00:01:28.796 ================= 00:01:28.796 00:01:28.796 apps: 00:01:28.796 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:28.796 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:28.796 test-pmd, test-regex, test-sad, test-security-perf, 00:01:28.796 00:01:28.796 Message: 00:01:28.796 ================= 00:01:28.796 Libraries Enabled 00:01:28.796 ================= 00:01:28.796 00:01:28.796 libs: 00:01:28.796 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:28.796 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:28.796 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:28.796 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:28.796 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:28.796 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:28.796 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:01:28.796 graph, node, 00:01:28.796 00:01:28.796 Message: 00:01:28.796 =============== 00:01:28.796 Drivers Enabled 00:01:28.796 =============== 00:01:28.796 00:01:28.796 common: 00:01:28.796 00:01:28.796 bus: 00:01:28.796 pci, vdev, 00:01:28.796 mempool: 00:01:28.796 ring, 00:01:28.796 dma: 00:01:28.796 00:01:28.796 net: 00:01:28.796 i40e, 00:01:28.796 raw: 00:01:28.796 00:01:28.796 crypto: 00:01:28.796 00:01:28.796 compress: 00:01:28.796 00:01:28.796 regex: 00:01:28.796 00:01:28.796 ml: 00:01:28.796 00:01:28.796 vdpa: 00:01:28.796 00:01:28.796 event: 00:01:28.796 00:01:28.796 baseband: 00:01:28.796 00:01:28.796 gpu: 00:01:28.796 00:01:28.796 00:01:28.796 Message: 00:01:28.796 ================= 00:01:28.796 Content Skipped 00:01:28.796 ================= 00:01:28.796 00:01:28.796 apps: 00:01:28.796 00:01:28.796 libs: 00:01:28.796 00:01:28.796 drivers: 00:01:28.796 common/cpt: not in enabled drivers build config 00:01:28.796 common/dpaax: not in enabled drivers build config 00:01:28.796 common/iavf: not in enabled drivers build config 00:01:28.796 common/idpf: not in enabled drivers build config 00:01:28.796 common/ionic: not in enabled drivers build config 00:01:28.796 common/mvep: not in enabled drivers build config 00:01:28.796 common/octeontx: not in enabled drivers build config 00:01:28.796 bus/auxiliary: not in enabled drivers build config 00:01:28.796 bus/cdx: not in enabled drivers build config 00:01:28.796 bus/dpaa: not in enabled drivers build config 00:01:28.796 bus/fslmc: not in enabled drivers build config 00:01:28.796 bus/ifpga: not in enabled drivers build config 00:01:28.796 bus/platform: not in enabled drivers build config 00:01:28.796 bus/uacce: not in enabled drivers build config 00:01:28.797 bus/vmbus: not in enabled drivers build config 00:01:28.797 common/cnxk: not in enabled drivers build config 00:01:28.797 common/mlx5: not in enabled drivers build config 00:01:28.797 common/nfp: not in enabled drivers build 
config 00:01:28.797 common/nitrox: not in enabled drivers build config 00:01:28.797 common/qat: not in enabled drivers build config 00:01:28.797 common/sfc_efx: not in enabled drivers build config 00:01:28.797 mempool/bucket: not in enabled drivers build config 00:01:28.797 mempool/cnxk: not in enabled drivers build config 00:01:28.797 mempool/dpaa: not in enabled drivers build config 00:01:28.797 mempool/dpaa2: not in enabled drivers build config 00:01:28.797 mempool/octeontx: not in enabled drivers build config 00:01:28.797 mempool/stack: not in enabled drivers build config 00:01:28.797 dma/cnxk: not in enabled drivers build config 00:01:28.797 dma/dpaa: not in enabled drivers build config 00:01:28.797 dma/dpaa2: not in enabled drivers build config 00:01:28.797 dma/hisilicon: not in enabled drivers build config 00:01:28.797 dma/idxd: not in enabled drivers build config 00:01:28.797 dma/ioat: not in enabled drivers build config 00:01:28.797 dma/odm: not in enabled drivers build config 00:01:28.797 dma/skeleton: not in enabled drivers build config 00:01:28.797 net/af_packet: not in enabled drivers build config 00:01:28.797 net/af_xdp: not in enabled drivers build config 00:01:28.797 net/ark: not in enabled drivers build config 00:01:28.797 net/atlantic: not in enabled drivers build config 00:01:28.797 net/avp: not in enabled drivers build config 00:01:28.797 net/axgbe: not in enabled drivers build config 00:01:28.797 net/bnx2x: not in enabled drivers build config 00:01:28.797 net/bnxt: not in enabled drivers build config 00:01:28.797 net/bonding: not in enabled drivers build config 00:01:28.797 net/cnxk: not in enabled drivers build config 00:01:28.797 net/cpfl: not in enabled drivers build config 00:01:28.797 net/cxgbe: not in enabled drivers build config 00:01:28.797 net/dpaa: not in enabled drivers build config 00:01:28.797 net/dpaa2: not in enabled drivers build config 00:01:28.797 net/e1000: not in enabled drivers build config 00:01:28.797 net/ena: not in enabled drivers build config 00:01:28.797 net/enetc: not in enabled drivers build config 00:01:28.797 net/enetfec: not in enabled drivers build config 00:01:28.797 net/enic: not in enabled drivers build config 00:01:28.797 net/failsafe: not in enabled drivers build config 00:01:28.797 net/fm10k: not in enabled drivers build config 00:01:28.797 net/gve: not in enabled drivers build config 00:01:28.797 net/hinic: not in enabled drivers build config 00:01:28.797 net/hns3: not in enabled drivers build config 00:01:28.797 net/iavf: not in enabled drivers build config 00:01:28.797 net/ice: not in enabled drivers build config 00:01:28.797 net/idpf: not in enabled drivers build config 00:01:28.797 net/igc: not in enabled drivers build config 00:01:28.797 net/ionic: not in enabled drivers build config 00:01:28.797 net/ipn3ke: not in enabled drivers build config 00:01:28.797 net/ixgbe: not in enabled drivers build config 00:01:28.797 net/mana: not in enabled drivers build config 00:01:28.797 net/memif: not in enabled drivers build config 00:01:28.797 net/mlx4: not in enabled drivers build config 00:01:28.797 net/mlx5: not in enabled drivers build config 00:01:28.797 net/mvneta: not in enabled drivers build config 00:01:28.797 net/mvpp2: not in enabled drivers build config 00:01:28.797 net/netvsc: not in enabled drivers build config 00:01:28.797 net/nfb: not in enabled drivers build config 00:01:28.797 net/nfp: not in enabled drivers build config 00:01:28.797 net/ngbe: not in enabled drivers build config 00:01:28.797 net/ntnic: not in enabled 
drivers build config 00:01:28.797 net/null: not in enabled drivers build config 00:01:28.797 net/octeontx: not in enabled drivers build config 00:01:28.797 net/octeon_ep: not in enabled drivers build config 00:01:28.797 net/pcap: not in enabled drivers build config 00:01:28.797 net/pfe: not in enabled drivers build config 00:01:28.797 net/qede: not in enabled drivers build config 00:01:28.797 net/ring: not in enabled drivers build config 00:01:28.797 net/sfc: not in enabled drivers build config 00:01:28.797 net/softnic: not in enabled drivers build config 00:01:28.797 net/tap: not in enabled drivers build config 00:01:28.797 net/thunderx: not in enabled drivers build config 00:01:28.797 net/txgbe: not in enabled drivers build config 00:01:28.797 net/vdev_netvsc: not in enabled drivers build config 00:01:28.797 net/vhost: not in enabled drivers build config 00:01:28.797 net/virtio: not in enabled drivers build config 00:01:28.797 net/vmxnet3: not in enabled drivers build config 00:01:28.797 raw/cnxk_bphy: not in enabled drivers build config 00:01:28.797 raw/cnxk_gpio: not in enabled drivers build config 00:01:28.797 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:28.797 raw/ifpga: not in enabled drivers build config 00:01:28.797 raw/ntb: not in enabled drivers build config 00:01:28.797 raw/skeleton: not in enabled drivers build config 00:01:28.797 crypto/armv8: not in enabled drivers build config 00:01:28.797 crypto/bcmfs: not in enabled drivers build config 00:01:28.797 crypto/caam_jr: not in enabled drivers build config 00:01:28.797 crypto/ccp: not in enabled drivers build config 00:01:28.797 crypto/cnxk: not in enabled drivers build config 00:01:28.797 crypto/dpaa_sec: not in enabled drivers build config 00:01:28.797 crypto/dpaa2_sec: not in enabled drivers build config 00:01:28.797 crypto/ionic: not in enabled drivers build config 00:01:28.797 crypto/ipsec_mb: not in enabled drivers build config 00:01:28.797 crypto/mlx5: not in enabled drivers build config 00:01:28.797 crypto/mvsam: not in enabled drivers build config 00:01:28.797 crypto/nitrox: not in enabled drivers build config 00:01:28.797 crypto/null: not in enabled drivers build config 00:01:28.797 crypto/octeontx: not in enabled drivers build config 00:01:28.797 crypto/openssl: not in enabled drivers build config 00:01:28.797 crypto/scheduler: not in enabled drivers build config 00:01:28.797 crypto/uadk: not in enabled drivers build config 00:01:28.797 crypto/virtio: not in enabled drivers build config 00:01:28.797 compress/isal: not in enabled drivers build config 00:01:28.797 compress/mlx5: not in enabled drivers build config 00:01:28.797 compress/nitrox: not in enabled drivers build config 00:01:28.797 compress/octeontx: not in enabled drivers build config 00:01:28.797 compress/uadk: not in enabled drivers build config 00:01:28.797 compress/zlib: not in enabled drivers build config 00:01:28.797 regex/mlx5: not in enabled drivers build config 00:01:28.797 regex/cn9k: not in enabled drivers build config 00:01:28.797 ml/cnxk: not in enabled drivers build config 00:01:28.797 vdpa/ifc: not in enabled drivers build config 00:01:28.797 vdpa/mlx5: not in enabled drivers build config 00:01:28.797 vdpa/nfp: not in enabled drivers build config 00:01:28.797 vdpa/sfc: not in enabled drivers build config 00:01:28.797 event/cnxk: not in enabled drivers build config 00:01:28.797 event/dlb2: not in enabled drivers build config 00:01:28.797 event/dpaa: not in enabled drivers build config 00:01:28.797 event/dpaa2: not in enabled 
drivers build config 00:01:28.797 event/dsw: not in enabled drivers build config 00:01:28.797 event/opdl: not in enabled drivers build config 00:01:28.797 event/skeleton: not in enabled drivers build config 00:01:28.797 event/sw: not in enabled drivers build config 00:01:28.797 event/octeontx: not in enabled drivers build config 00:01:28.797 baseband/acc: not in enabled drivers build config 00:01:28.797 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:28.797 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:28.797 baseband/la12xx: not in enabled drivers build config 00:01:28.797 baseband/null: not in enabled drivers build config 00:01:28.797 baseband/turbo_sw: not in enabled drivers build config 00:01:28.797 gpu/cuda: not in enabled drivers build config 00:01:28.797 00:01:28.797 00:01:28.797 Build targets in project: 224 00:01:28.797 00:01:28.797 DPDK 24.11.0-rc1 00:01:28.797 00:01:28.797 User defined options 00:01:28.797 libdir : lib 00:01:28.797 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.797 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:28.797 c_link_args : 00:01:28.797 enable_docs : false 00:01:28.797 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:28.797 enable_kmods : false 00:01:28.797 machine : native 00:01:28.797 tests : false 00:01:28.797 00:01:28.797 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:28.797 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:28.797 04:37:19 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:28.797 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:28.797 [1/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:28.797 [2/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:28.797 [3/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:28.797 [4/724] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:28.797 [5/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:29.056 [6/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:29.056 [7/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:29.056 [8/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:29.056 [9/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:29.056 [10/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:29.056 [11/724] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:29.056 [12/724] Linking static target lib/librte_kvargs.a 00:01:29.056 [13/724] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:29.056 [14/724] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:29.056 [15/724] Linking static target lib/librte_log.a 00:01:29.318 [16/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:29.318 [17/724] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:29.318 [18/724] Linking static target lib/librte_argparse.a 00:01:29.581 [19/724] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.843 [20/724] Generating lib/argparse.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:29.843 [21/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:29.843 [22/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:01:29.843 [23/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:29.843 [24/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:30.115 [25/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:30.115 [26/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:30.115 [27/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:30.115 [28/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:30.115 [29/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:30.115 [30/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:30.115 [31/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:30.115 [32/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:30.115 [33/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:30.115 [34/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:30.115 [35/724] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:30.115 [36/724] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.115 [37/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:30.115 [38/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:30.115 [39/724] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:30.115 [40/724] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:30.115 [41/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:30.115 [42/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:30.115 [43/724] Linking target lib/librte_log.so.25.0 00:01:30.115 [44/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:30.115 [45/724] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:30.115 [46/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:30.115 [47/724] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:30.116 [48/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:30.116 [49/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:30.116 [50/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:30.116 [51/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:30.116 [52/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:30.116 [53/724] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:30.116 [54/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:30.116 [55/724] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:30.375 [56/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:30.375 [57/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:30.375 [58/724] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:30.375 [59/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 
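For reference, the "User defined options" summary printed by meson above, together with the ninja command that follows it, maps onto an invocation roughly like the sketch below. The option names are DPDK meson project options; the values, install prefix, build directory and -j48 job count are the ones reported in this log, and anything not shown is assumed to stay at its default:

    # configure the DPDK build tree (values mirror the logged "User defined options")
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    # compile with the same parallelism as the logged run
    ninja -C build-tmp -j48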
00:01:30.375 [60/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:30.375 [61/724] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:01:30.375 [62/724] Linking target lib/librte_kvargs.so.25.0 00:01:30.639 [63/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:30.639 [64/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:30.639 [65/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:30.639 [66/724] Linking target lib/librte_argparse.so.25.0 00:01:30.639 [67/724] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:01:30.639 [68/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:30.639 [69/724] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:30.639 [70/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:30.639 [71/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:30.902 [72/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:30.902 [73/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:31.166 [74/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:31.166 [75/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:31.166 [76/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:31.166 [77/724] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:31.166 [78/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:31.166 [79/724] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:31.166 [80/724] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:31.166 [81/724] Linking static target lib/librte_pci.a 00:01:31.166 [82/724] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:31.166 [83/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:31.426 [84/724] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:31.426 [85/724] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:31.426 [86/724] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:31.426 [87/724] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:31.426 [88/724] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:31.426 [89/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:31.426 [90/724] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:31.426 [91/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:31.426 [92/724] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:31.426 [93/724] Linking static target lib/librte_ring.a 00:01:31.426 [94/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:31.426 [95/724] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:31.426 [96/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:31.426 [97/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:31.426 [98/724] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:31.426 [99/724] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:31.426 [100/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:31.426 [101/724] Linking static 
target lib/librte_meter.a 00:01:31.426 [102/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:31.426 [103/724] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:31.426 [104/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:31.426 [105/724] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:31.426 [106/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:31.692 [107/724] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:31.692 [108/724] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:31.692 [109/724] Linking static target lib/librte_telemetry.a 00:01:31.692 [110/724] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.692 [111/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:31.692 [112/724] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:31.692 [113/724] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:31.692 [114/724] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:31.692 [115/724] Linking static target lib/librte_net.a 00:01:31.692 [116/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:31.692 [117/724] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:31.958 [118/724] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.958 [119/724] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.958 [120/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:31.958 [121/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:31.958 [122/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:31.958 [123/724] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:31.958 [124/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:31.958 [125/724] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:32.225 [126/724] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.225 [127/724] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:32.225 [128/724] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:32.225 [129/724] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.225 [130/724] Linking static target lib/librte_mempool.a 00:01:32.225 [131/724] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:32.225 [132/724] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:32.225 [133/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:32.225 [134/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:32.225 [135/724] Linking target lib/librte_telemetry.so.25.0 00:01:32.485 [136/724] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:32.485 [137/724] Linking static target lib/librte_eal.a 00:01:32.485 [138/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:32.485 [139/724] Linking static target lib/librte_cmdline.a 00:01:32.485 [140/724] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:32.485 [141/724] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:32.485 [142/724] 
Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:32.485 [143/724] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:32.485 [144/724] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:01:32.485 [145/724] Linking static target lib/librte_cfgfile.a 00:01:32.746 [146/724] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:32.746 [147/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:32.746 [148/724] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:32.746 [149/724] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:32.746 [150/724] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:32.746 [151/724] Linking static target lib/librte_metrics.a 00:01:32.746 [152/724] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:32.746 [153/724] Linking static target lib/librte_rcu.a 00:01:32.746 [154/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:33.013 [155/724] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:33.013 [156/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:33.013 [157/724] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:33.013 [158/724] Linking static target lib/librte_bitratestats.a 00:01:33.013 [159/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:33.013 [160/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:33.013 [161/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:33.013 [162/724] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.279 [163/724] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:33.279 [164/724] Linking static target lib/librte_mbuf.a 00:01:33.279 [165/724] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:33.279 [166/724] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.279 [167/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:33.279 [168/724] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:33.279 [169/724] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.279 [170/724] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.279 [171/724] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:33.279 [172/724] Linking static target lib/librte_timer.a 00:01:33.279 [173/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:33.279 [174/724] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.538 [175/724] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:33.538 [176/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:33.538 [177/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:33.538 [178/724] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:33.538 [179/724] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:33.803 [180/724] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:33.803 [181/724] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:33.803 [182/724] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 
00:01:33.803 [183/724] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.803 [184/724] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:33.803 [185/724] Linking static target lib/librte_bbdev.a 00:01:33.803 [186/724] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:33.803 [187/724] Linking static target lib/librte_compressdev.a 00:01:33.803 [188/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:33.803 [189/724] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:34.066 [190/724] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.066 [191/724] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:34.066 [192/724] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:34.066 [193/724] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:34.066 [194/724] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.330 [195/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:34.330 [196/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:34.592 [197/724] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.592 [198/724] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.592 [199/724] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:34.592 [200/724] Linking static target lib/librte_distributor.a 00:01:34.592 [201/724] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:34.592 [202/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:34.592 [203/724] Linking static target lib/librte_dmadev.a 00:01:34.592 [204/724] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:34.592 [205/724] Linking static target lib/librte_bpf.a 00:01:34.859 [206/724] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:34.859 [207/724] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:34.859 [208/724] Linking static target lib/librte_dispatcher.a 00:01:34.859 [209/724] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:34.859 [210/724] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:34.859 [211/724] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:34.859 [212/724] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:35.120 [213/724] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:35.120 [214/724] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:35.120 [215/724] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:35.120 [216/724] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:35.120 [217/724] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:35.120 [218/724] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:35.120 [219/724] Linking static target lib/librte_gpudev.a 00:01:35.120 [220/724] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:35.120 [221/724] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.120 [222/724] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:35.120 [223/724] Compiling C 
object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:35.120 [224/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:35.120 [225/724] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:35.120 [226/724] Linking static target lib/librte_gro.a 00:01:35.120 [227/724] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.120 [228/724] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:35.120 [229/724] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:35.120 [230/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:35.395 [231/724] Linking static target lib/librte_jobstats.a 00:01:35.395 [232/724] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:35.395 [233/724] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:35.395 [234/724] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:35.395 [235/724] Linking static target lib/librte_gso.a 00:01:35.395 [236/724] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.395 [237/724] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:35.656 [238/724] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.656 [239/724] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.656 [240/724] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:35.656 [241/724] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:35.656 [242/724] Linking static target lib/librte_latencystats.a 00:01:35.656 [243/724] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:35.656 [244/724] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:35.656 [245/724] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.656 [246/724] Linking static target lib/librte_ip_frag.a 00:01:35.656 [247/724] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:35.656 [248/724] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.922 [249/724] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:35.922 [250/724] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:35.922 [251/724] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:35.922 [252/724] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:35.922 [253/724] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:35.922 [254/724] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:35.922 [255/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:35.922 [256/724] Linking static target lib/librte_efd.a 00:01:35.923 [257/724] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.923 [258/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:36.185 [259/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:36.185 [260/724] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.185 [261/724] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:36.185 [262/724] Compiling C 
object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:36.445 [263/724] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:36.445 [264/724] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.445 [265/724] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:36.445 [266/724] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:36.445 [267/724] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:36.445 [268/724] Linking static target lib/librte_lpm.a 00:01:36.445 [269/724] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.445 [270/724] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:36.445 [271/724] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:36.445 [272/724] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:36.710 [273/724] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:36.710 [274/724] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:36.710 [275/724] Linking static target lib/librte_regexdev.a 00:01:36.710 [276/724] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:36.710 [277/724] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:36.710 [278/724] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:36.710 [279/724] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:36.710 [280/724] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:36.710 [281/724] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:36.710 [282/724] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:36.976 [283/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:36.976 [284/724] Linking static target lib/librte_pcapng.a 00:01:36.976 [285/724] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:36.976 [286/724] Linking static target lib/librte_rawdev.a 00:01:36.976 [287/724] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:36.976 [288/724] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:36.976 [289/724] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:36.976 [290/724] Linking static target lib/librte_stack.a 00:01:36.976 [291/724] Linking static target lib/librte_power.a 00:01:36.976 [292/724] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:36.976 [293/724] Linking static target lib/librte_mldev.a 00:01:36.976 [294/724] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:36.976 [295/724] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.976 [296/724] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:36.976 [297/724] Linking static target lib/acl/libavx2_tmp.a 00:01:37.240 [298/724] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:37.240 [299/724] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:37.240 [300/724] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:37.240 [301/724] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:37.240 [302/724] Linking static target lib/librte_reorder.a 00:01:37.240 [303/724] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.240 [304/724] 
Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.503 [305/724] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:37.503 [306/724] Linking static target lib/librte_rib.a 00:01:37.503 [307/724] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:37.503 [308/724] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:37.503 [309/724] Linking static target lib/librte_security.a 00:01:37.503 [310/724] Linking static target lib/librte_cryptodev.a 00:01:37.503 [311/724] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:37.503 [312/724] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:37.503 [313/724] Linking static target lib/librte_hash.a 00:01:37.503 [314/724] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:37.503 [315/724] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.763 [316/724] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:37.763 [317/724] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:37.763 [318/724] Linking static target lib/acl/libavx512_tmp.a 00:01:37.763 [319/724] Linking static target lib/librte_acl.a 00:01:37.763 [320/724] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.763 [321/724] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:37.763 [322/724] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:37.763 [323/724] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:38.024 [324/724] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:38.024 [325/724] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:38.024 [326/724] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.024 [327/724] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.024 [328/724] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:38.024 [329/724] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:38.024 [330/724] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:38.024 [331/724] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:38.024 [332/724] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.024 [333/724] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:38.024 [334/724] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.024 [335/724] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:38.286 [336/724] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:38.286 [337/724] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.286 [338/724] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:38.286 [339/724] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:38.551 [340/724] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:38.551 [341/724] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.551 [342/724] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:38.551 [343/724] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:38.551 [344/724] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:38.551 [345/724] 
Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:39.131 [346/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:39.131 [347/724] Linking static target lib/librte_eventdev.a 00:01:39.131 [348/724] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:39.131 [349/724] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:39.131 [350/724] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:39.131 [351/724] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.131 [352/724] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:39.398 [353/724] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:39.398 [354/724] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:39.398 [355/724] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:39.398 [356/724] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:39.398 [357/724] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.398 [358/724] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:39.398 [359/724] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:39.398 [360/724] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:39.398 [361/724] Linking static target lib/librte_member.a 00:01:39.398 [362/724] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:39.398 [363/724] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.398 [364/724] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.398 [365/724] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:39.659 [366/724] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:39.659 [367/724] Linking static target lib/librte_sched.a 00:01:39.659 [368/724] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:39.659 [369/724] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:39.659 [370/724] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:39.659 [371/724] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:39.659 [372/724] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:39.659 [373/724] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:39.659 [374/724] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:39.659 [375/724] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:39.659 [376/724] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:39.920 [377/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:39.920 [378/724] Linking static target lib/librte_ethdev.a 00:01:39.920 [379/724] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:39.920 [380/724] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:39.920 [381/724] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.920 [382/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:40.183 [383/724] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:40.183 [384/724] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.183 [385/724] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:40.448 [386/724] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:40.448 [387/724] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.448 [388/724] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:40.448 [389/724] Linking static target lib/librte_pdump.a 00:01:40.448 [390/724] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:40.448 [391/724] Linking static target lib/librte_fib.a 00:01:40.718 [392/724] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:40.718 [393/724] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:40.718 [394/724] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:40.718 [395/724] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:40.718 [396/724] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:40.718 [397/724] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:40.718 [398/724] Linking static target lib/librte_ipsec.a 00:01:40.718 [399/724] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:40.718 [400/724] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.718 [401/724] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:40.718 [402/724] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:40.718 [403/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:40.718 [404/724] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:40.981 [405/724] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:40.981 [406/724] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.981 [407/724] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:40.981 [408/724] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:40.981 [409/724] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:40.981 [410/724] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:40.981 [411/724] Linking static target lib/librte_pdcp.a 00:01:40.981 [412/724] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:40.981 [413/724] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:41.245 [414/724] Linking static target lib/librte_table.a 00:01:41.245 [415/724] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:41.245 [416/724] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.245 [417/724] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:41.245 [418/724] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:41.245 [419/724] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.245 [420/724] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:41.508 [421/724] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:41.508 [422/724] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:41.508 [423/724] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:41.774 [424/724] Linking static target lib/librte_graph.a 00:01:41.774 [425/724] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.774 [426/724] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 
00:01:41.774 [427/724] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:41.774 [428/724] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:41.774 [429/724] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:41.774 [430/724] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:41.774 [431/724] Linking static target lib/librte_port.a 00:01:41.774 [432/724] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:42.038 [433/724] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:42.038 [434/724] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:42.038 [435/724] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:42.038 [436/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:42.038 [437/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:42.038 [438/724] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:42.038 [439/724] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:42.038 [440/724] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:42.306 [441/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:42.306 [442/724] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.565 [443/724] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:42.565 [444/724] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:42.565 [445/724] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:42.565 [446/724] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.565 [447/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:42.565 [448/724] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.565 [449/724] Linking static target drivers/librte_bus_vdev.a 00:01:42.565 [450/724] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.565 [451/724] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.830 [452/724] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:42.830 [453/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:42.830 [454/724] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.830 [455/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:42.830 [456/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:42.830 [457/724] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:42.830 [458/724] Linking static target lib/librte_node.a 00:01:42.830 [459/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:42.830 [460/724] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:43.114 [461/724] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:43.114 [462/724] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:43.114 [463/724] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:43.114 [464/724] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 
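The "Generating drivers/rte_bus_pci.pmd.c" and "rte_bus_vdev.pmd.c" steps above (and the equivalent step for net/i40e later in this run) are where meson embeds each driver's PMD metadata into the driver library. Once the shared driver objects are linked, that metadata can be read back with DPDK's pmdinfo helper; a minimal sketch, assuming the build-tmp layout from this log and python3 with pyelftools available:

    # dump the PMD information embedded in the i40e net driver
    python3 usertools/dpdk-pmdinfo.py build-tmp/drivers/librte_net_i40e.so.25.0
    # output is the JSON generated into rte_net_i40e.pmd.c: driver name,
    # supported PCI vendor/device ids, and any required kernel modules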
00:01:43.114 [465/724] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:43.114 [466/724] Linking static target drivers/librte_bus_pci.a 00:01:43.114 [467/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:43.114 [468/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:43.114 [469/724] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:43.114 [470/724] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:43.395 [471/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:43.395 [472/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:43.395 [473/724] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:43.395 [474/724] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:43.395 [475/724] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:43.395 [476/724] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:43.395 [477/724] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.395 [478/724] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.395 [479/724] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:43.697 [480/724] Linking target lib/librte_eal.so.25.0 00:01:43.697 [481/724] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:43.697 [482/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:43.697 [483/724] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:43.697 [484/724] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:43.697 [485/724] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:43.697 [486/724] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:43.697 [487/724] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:43.697 [488/724] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:43.697 [489/724] Linking static target drivers/librte_mempool_ring.a 00:01:43.697 [490/724] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:43.697 [491/724] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:01:43.967 [492/724] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:43.967 [493/724] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.967 [494/724] Linking target lib/librte_meter.so.25.0 00:01:43.967 [495/724] Linking target lib/librte_ring.so.25.0 00:01:43.967 [496/724] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:43.967 [497/724] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:43.967 [498/724] Linking target lib/librte_pci.so.25.0 00:01:43.967 [499/724] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:43.967 [500/724] Linking target lib/librte_timer.so.25.0 00:01:43.967 [501/724] Linking target lib/librte_acl.so.25.0 00:01:43.967 [502/724] Linking target lib/librte_cfgfile.so.25.0 00:01:44.232 [503/724] Linking target lib/librte_dmadev.so.25.0 00:01:44.232 [504/724] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:01:44.232 [505/724] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:01:44.232 [506/724] Compiling C object 
app/dpdk-graph.p/graph_utils.c.o 00:01:44.232 [507/724] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:01:44.232 [508/724] Linking target lib/librte_jobstats.so.25.0 00:01:44.232 [509/724] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:44.232 [510/724] Linking target lib/librte_rcu.so.25.0 00:01:44.232 [511/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:44.232 [512/724] Linking target lib/librte_mempool.so.25.0 00:01:44.232 [513/724] Linking target lib/librte_rawdev.so.25.0 00:01:44.232 [514/724] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:01:44.232 [515/724] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:44.232 [516/724] Linking target lib/librte_stack.so.25.0 00:01:44.232 [517/724] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:01:44.232 [518/724] Linking target drivers/librte_bus_vdev.so.25.0 00:01:44.232 [519/724] Linking target drivers/librte_bus_pci.so.25.0 00:01:44.232 [520/724] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:01:44.494 [521/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:44.494 [522/724] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:44.494 [523/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:44.494 [524/724] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:01:44.494 [525/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:44.494 [526/724] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:01:44.494 [527/724] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:44.494 [528/724] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:44.494 [529/724] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:01:44.494 [530/724] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:01:44.494 [531/724] Linking target drivers/librte_mempool_ring.so.25.0 00:01:44.494 [532/724] Linking target lib/librte_mbuf.so.25.0 00:01:44.757 [533/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:44.757 [534/724] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:44.757 [535/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:44.757 [536/724] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:01:44.757 [537/724] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:44.757 [538/724] Linking target lib/librte_net.so.25.0 00:01:45.020 [539/724] Linking target lib/librte_bbdev.so.25.0 00:01:45.020 [540/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:45.020 [541/724] Linking target lib/librte_compressdev.so.25.0 00:01:45.020 [542/724] Linking target lib/librte_distributor.so.25.0 00:01:45.020 [543/724] Linking target lib/librte_cryptodev.so.25.0 00:01:45.020 [544/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:45.020 [545/724] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:45.020 [546/724] Generating symbol file 
lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:01:45.281 [547/724] Linking target lib/librte_gpudev.so.25.0 00:01:45.281 [548/724] Linking target lib/librte_regexdev.so.25.0 00:01:45.281 [549/724] Linking target lib/librte_mldev.so.25.0 00:01:45.281 [550/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:45.281 [551/724] Linking target lib/librte_hash.so.25.0 00:01:45.281 [552/724] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:45.281 [553/724] Linking target lib/librte_cmdline.so.25.0 00:01:45.281 [554/724] Linking target lib/librte_rib.so.25.0 00:01:45.281 [555/724] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:45.281 [556/724] Linking target lib/librte_reorder.so.25.0 00:01:45.281 [557/724] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:45.281 [558/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:45.281 [559/724] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:45.281 [560/724] Linking target lib/librte_sched.so.25.0 00:01:45.281 [561/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:45.281 [562/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:45.281 [563/724] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:45.281 [564/724] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:01:45.281 [565/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:45.281 [566/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:45.281 [567/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:45.281 [568/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:45.281 [569/724] Linking target lib/librte_security.so.25.0 00:01:45.543 [570/724] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:45.543 [571/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:45.543 [572/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:45.543 [573/724] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:01:45.543 [574/724] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:01:45.543 [575/724] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:01:45.543 [576/724] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:45.543 [577/724] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:01:45.543 [578/724] Linking target lib/librte_efd.so.25.0 00:01:45.543 [579/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:45.543 [580/724] Linking target lib/librte_lpm.so.25.0 00:01:45.543 [581/724] Linking target lib/librte_member.so.25.0 00:01:45.543 [582/724] Linking target lib/librte_fib.so.25.0 00:01:45.543 [583/724] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:01:45.543 [584/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:45.543 [585/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:45.543 [586/724] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:45.808 [587/724] Linking target lib/librte_ipsec.so.25.0 00:01:45.808 [588/724] Linking target lib/librte_pdcp.so.25.0 00:01:45.808 [589/724] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:45.808 [590/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:45.808 [591/724] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:01:45.809 [592/724] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:01:45.809 [593/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:45.809 [594/724] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:46.073 [595/724] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:46.073 [596/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:46.073 [597/724] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:46.333 [598/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:46.333 [599/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:46.333 [600/724] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:46.333 [601/724] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:46.592 [602/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:46.592 [603/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:46.592 [604/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:46.592 [605/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:46.592 [606/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:46.592 [607/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:46.855 [608/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:46.855 [609/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:46.855 [610/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:46.855 [611/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:46.855 [612/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:46.855 [613/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:46.855 [614/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:46.855 [615/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:46.855 [616/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:46.855 [617/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:46.855 [618/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:47.117 [619/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:47.117 [620/724] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:47.117 [621/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:47.118 [622/724] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:47.118 [623/724] Compiling C object 
app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:47.380 [624/724] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:47.639 [625/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:47.639 [626/724] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:47.639 [627/724] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:47.639 [628/724] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:47.639 [629/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:47.639 [630/724] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:47.898 [631/724] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:47.898 [632/724] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:47.898 [633/724] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:47.898 [634/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:47.898 [635/724] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:47.898 [636/724] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:47.898 [637/724] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:48.157 [638/724] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:48.157 [639/724] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:48.157 [640/724] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:48.157 [641/724] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.157 [642/724] Linking target lib/librte_ethdev.so.25.0 00:01:48.416 [643/724] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:01:48.416 [644/724] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:48.416 [645/724] Linking target lib/librte_bpf.so.25.0 00:01:48.416 [646/724] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:48.416 [647/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:48.416 [648/724] Linking target lib/librte_pcapng.so.25.0 00:01:48.416 [649/724] Linking target lib/librte_gso.so.25.0 00:01:48.416 [650/724] Linking target lib/librte_metrics.so.25.0 00:01:48.416 [651/724] Linking target lib/librte_ip_frag.so.25.0 00:01:48.416 [652/724] Linking target lib/librte_gro.so.25.0 00:01:48.416 [653/724] Linking target lib/librte_power.so.25.0 00:01:48.416 [654/724] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:48.416 [655/724] Linking target lib/librte_eventdev.so.25.0 00:01:48.416 [656/724] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:48.416 [657/724] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:01:48.416 [658/724] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:01:48.675 [659/724] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:01:48.675 [660/724] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:01:48.675 [661/724] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:01:48.675 [662/724] Linking target lib/librte_bitratestats.so.25.0 00:01:48.675 [663/724] Linking target lib/librte_latencystats.so.25.0 00:01:48.675 [664/724] Linking target lib/librte_dispatcher.so.25.0 00:01:48.675 [665/724] Linking target 
lib/librte_pdump.so.25.0 00:01:48.675 [666/724] Linking target lib/librte_graph.so.25.0 00:01:48.675 [667/724] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:48.675 [668/724] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:48.675 [669/724] Linking target lib/librte_port.so.25.0 00:01:48.675 [670/724] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:48.675 [671/724] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:01:48.675 [672/724] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:01:48.675 [673/724] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:48.934 [674/724] Linking target lib/librte_table.so.25.0 00:01:48.934 [675/724] Linking target lib/librte_node.so.25.0 00:01:48.934 [676/724] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:48.934 [677/724] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:01:48.934 [678/724] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:48.934 [679/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:48.934 [680/724] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:49.509 [681/724] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:49.509 [682/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:49.509 [683/724] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:49.767 [684/724] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:49.767 [685/724] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:50.026 [686/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:50.026 [687/724] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:50.026 [688/724] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:50.026 [689/724] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:50.026 [690/724] Linking static target drivers/librte_net_i40e.a 00:01:50.593 [691/724] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:50.593 [692/724] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.852 [693/724] Linking target drivers/librte_net_i40e.so.25.0 00:01:51.111 [694/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:51.111 [695/724] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:52.488 [696/724] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:00.603 [697/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:00.603 [698/724] Linking static target lib/librte_pipeline.a 00:02:00.603 [699/724] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:00.603 [700/724] Linking static target lib/librte_vhost.a 00:02:00.603 [701/724] Linking target app/dpdk-test-acl 00:02:00.603 [702/724] Linking target app/dpdk-dumpcap 00:02:00.603 [703/724] Linking target app/dpdk-test-dma-perf 00:02:00.603 [704/724] Linking target app/dpdk-pdump 00:02:00.603 [705/724] Linking target app/dpdk-test-gpudev 00:02:00.603 [706/724] Linking target app/dpdk-test-cmdline 00:02:00.603 [707/724] Linking target app/dpdk-proc-info 00:02:00.861 [708/724] Linking target app/dpdk-test-mldev 00:02:00.861 
[709/724] Linking target app/dpdk-test-pipeline 00:02:00.861 [710/724] Linking target app/dpdk-test-crypto-perf 00:02:00.861 [711/724] Linking target app/dpdk-test-eventdev 00:02:00.861 [712/724] Linking target app/dpdk-test-sad 00:02:00.861 [713/724] Linking target app/dpdk-test-compress-perf 00:02:00.861 [714/724] Linking target app/dpdk-test-fib 00:02:00.861 [715/724] Linking target app/dpdk-graph 00:02:00.861 [716/724] Linking target app/dpdk-test-flow-perf 00:02:00.861 [717/724] Linking target app/dpdk-test-security-perf 00:02:00.861 [718/724] Linking target app/dpdk-test-regex 00:02:00.861 [719/724] Linking target app/dpdk-test-bbdev 00:02:00.861 [720/724] Linking target app/dpdk-testpmd 00:02:01.428 [721/724] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.428 [722/724] Linking target lib/librte_vhost.so.25.0 00:02:01.997 [723/724] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.256 [724/724] Linking target lib/librte_pipeline.so.25.0 00:02:02.256 04:37:52 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:02.256 04:37:52 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:02.256 04:37:52 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:02.256 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:02.256 [0/1] Installing files. 00:02:02.516 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:02.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:02.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:02.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:02.516 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:02.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:02.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:02.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:02.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:02.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:02.516 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:02.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:02.516 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_eddsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.517 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 
00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:02.518 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:02.779 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.779 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:02.780 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:02.780 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:02.782 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.782 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.783 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:02.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:02.783 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_log.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_kvargs.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_argparse.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_telemetry.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_eal.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_rcu.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_mempool.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_mbuf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_net.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_meter.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_ethdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_cmdline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_metrics.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_hash.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_timer.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_acl.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:02:02.783 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_bbdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_bitratestats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_bpf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_cfgfile.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_compressdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_cryptodev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_distributor.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_dmadev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_efd.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_eventdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_dispatcher.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_gpudev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_gro.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_gso.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_ip_frag.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.783 Installing lib/librte_jobstats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:02:02.783 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_latencystats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_lpm.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_member.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_pcapng.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_power.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_rawdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_regexdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_mldev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_rib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_reorder.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_sched.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_security.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_stack.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_vhost.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_ipsec.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_pdcp.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_fib.a 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_fib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_port.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_pdump.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_table.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_pipeline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_graph.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing lib/librte_node.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing drivers/librte_bus_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:02:03.355 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing drivers/librte_bus_vdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:02:03.355 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing drivers/librte_mempool_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:02:03.355 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.355 Installing drivers/librte_net_i40e.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:02:03.355 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:02:03.355 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitset.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_cksum.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip4.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:03.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:03.359 Installing symlink pointing to librte_log.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.25 00:02:03.359 Installing symlink pointing to librte_log.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:03.359 Installing symlink pointing to librte_kvargs.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.25 00:02:03.359 Installing symlink pointing to librte_kvargs.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:03.359 Installing symlink pointing to librte_argparse.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.25 00:02:03.359 Installing symlink pointing to librte_argparse.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:03.359 Installing symlink pointing to librte_telemetry.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.25 00:02:03.359 Installing symlink pointing to librte_telemetry.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:03.359 Installing symlink pointing to librte_eal.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.25 00:02:03.359 Installing symlink pointing to librte_eal.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:03.359 Installing symlink pointing to librte_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.25 00:02:03.359 Installing symlink pointing to librte_ring.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:03.360 Installing symlink pointing to librte_rcu.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.25 00:02:03.360 Installing symlink pointing to librte_rcu.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:03.360 Installing symlink pointing to librte_mempool.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.25 00:02:03.360 Installing symlink pointing to librte_mempool.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:03.360 Installing symlink pointing to librte_mbuf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.25 00:02:03.360 Installing symlink pointing to librte_mbuf.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:03.360 Installing symlink pointing to librte_net.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.25 00:02:03.360 Installing symlink pointing to librte_net.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:03.360 Installing symlink pointing to librte_meter.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.25 00:02:03.360 Installing symlink pointing to librte_meter.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:03.360 Installing symlink pointing to librte_ethdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.25 00:02:03.360 Installing symlink pointing to librte_ethdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:03.360 Installing symlink pointing to librte_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.25 00:02:03.360 Installing symlink pointing to librte_pci.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:03.360 Installing symlink pointing to librte_cmdline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.25 
00:02:03.360 Installing symlink pointing to librte_cmdline.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:03.360 Installing symlink pointing to librte_metrics.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.25 00:02:03.360 Installing symlink pointing to librte_metrics.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:03.360 Installing symlink pointing to librte_hash.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.25 00:02:03.360 Installing symlink pointing to librte_hash.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:03.360 Installing symlink pointing to librte_timer.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.25 00:02:03.360 Installing symlink pointing to librte_timer.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:03.360 Installing symlink pointing to librte_acl.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.25 00:02:03.360 Installing symlink pointing to librte_acl.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:03.360 Installing symlink pointing to librte_bbdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.25 00:02:03.360 Installing symlink pointing to librte_bbdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:03.360 Installing symlink pointing to librte_bitratestats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.25 00:02:03.360 Installing symlink pointing to librte_bitratestats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:03.360 Installing symlink pointing to librte_bpf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.25 00:02:03.360 Installing symlink pointing to librte_bpf.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:03.360 Installing symlink pointing to librte_cfgfile.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.25 00:02:03.360 Installing symlink pointing to librte_cfgfile.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:03.360 Installing symlink pointing to librte_compressdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.25 00:02:03.360 Installing symlink pointing to librte_compressdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:03.360 Installing symlink pointing to librte_cryptodev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.25 00:02:03.360 Installing symlink pointing to librte_cryptodev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:03.360 Installing symlink pointing to librte_distributor.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.25 00:02:03.360 Installing symlink pointing to librte_distributor.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:03.360 Installing symlink pointing to librte_dmadev.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.25 00:02:03.360 Installing symlink pointing to librte_dmadev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:03.360 Installing symlink pointing to librte_efd.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.25 00:02:03.360 Installing symlink pointing to librte_efd.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:03.360 Installing symlink pointing to librte_eventdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.25 00:02:03.360 Installing symlink pointing to librte_eventdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:03.360 Installing symlink pointing to librte_dispatcher.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.25 00:02:03.360 Installing symlink pointing to librte_dispatcher.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:03.360 Installing symlink pointing to librte_gpudev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.25 00:02:03.360 Installing symlink pointing to librte_gpudev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:03.360 Installing symlink pointing to librte_gro.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.25 00:02:03.360 Installing symlink pointing to librte_gro.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:03.360 Installing symlink pointing to librte_gso.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.25 00:02:03.360 Installing symlink pointing to librte_gso.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:03.360 Installing symlink pointing to librte_ip_frag.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.25 00:02:03.360 Installing symlink pointing to librte_ip_frag.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:03.360 Installing symlink pointing to librte_jobstats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.25 00:02:03.360 Installing symlink pointing to librte_jobstats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:03.360 Installing symlink pointing to librte_latencystats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.25 00:02:03.360 Installing symlink pointing to librte_latencystats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:03.360 Installing symlink pointing to librte_lpm.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.25 00:02:03.360 Installing symlink pointing to librte_lpm.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:03.360 Installing symlink pointing to librte_member.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.25 00:02:03.360 Installing symlink pointing to librte_member.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:03.360 Installing symlink pointing to librte_pcapng.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.25 00:02:03.360 Installing symlink pointing to librte_pcapng.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:03.360 Installing symlink pointing to librte_power.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.25 00:02:03.360 Installing symlink pointing to librte_power.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:03.360 Installing symlink pointing to librte_rawdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.25 00:02:03.360 Installing symlink pointing to librte_rawdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:03.360 Installing symlink pointing to librte_regexdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.25 00:02:03.360 Installing symlink pointing to librte_regexdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:03.360 Installing symlink pointing to librte_mldev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.25 00:02:03.360 Installing symlink pointing to librte_mldev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:03.360 Installing symlink pointing to librte_rib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.25 00:02:03.360 Installing symlink pointing to librte_rib.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:03.360 Installing symlink pointing to librte_reorder.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.25 00:02:03.360 Installing symlink pointing to librte_reorder.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:03.360 Installing symlink pointing to librte_sched.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.25 00:02:03.360 Installing symlink pointing to librte_sched.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:03.360 Installing symlink pointing to librte_security.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.25 00:02:03.360 Installing symlink pointing to librte_security.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:03.360 Installing symlink pointing to librte_stack.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.25 00:02:03.360 Installing symlink pointing to librte_stack.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:03.360 Installing symlink pointing to librte_vhost.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.25 00:02:03.360 Installing symlink pointing to librte_vhost.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:03.360 Installing symlink pointing to librte_ipsec.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.25 00:02:03.360 Installing symlink pointing to librte_ipsec.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:03.360 Installing symlink pointing to librte_pdcp.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.25 00:02:03.360 Installing symlink pointing to librte_pdcp.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:03.360 Installing symlink pointing to librte_fib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.25 00:02:03.360 Installing symlink pointing to librte_fib.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:03.360 Installing symlink pointing to librte_port.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.25 00:02:03.360 Installing symlink pointing to librte_port.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:03.360 Installing symlink pointing to librte_pdump.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.25 00:02:03.360 Installing symlink pointing to librte_pdump.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:03.360 Installing symlink pointing to librte_table.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.25 00:02:03.360 Installing symlink pointing to librte_table.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:03.360 Installing symlink pointing to librte_pipeline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.25 00:02:03.360 Installing symlink pointing to librte_pipeline.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:03.361 Installing symlink pointing to librte_graph.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.25 00:02:03.361 Installing symlink pointing to librte_graph.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:03.361 Installing symlink pointing to librte_node.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.25 00:02:03.361 Installing symlink pointing to librte_node.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:03.361 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:02:03.361 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:02:03.361 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:02:03.361 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:02:03.361 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:02:03.361 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:02:03.361 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:02:03.361 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:02:03.361 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:02:03.361 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:02:03.361 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:02:03.361 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:02:03.361 Installing symlink pointing to librte_bus_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:02:03.361 Installing symlink pointing to librte_bus_pci.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 
00:02:03.361 Installing symlink pointing to librte_bus_vdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:02:03.361 Installing symlink pointing to librte_bus_vdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:02:03.361 Installing symlink pointing to librte_mempool_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:02:03.361 Installing symlink pointing to librte_mempool_ring.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:02:03.361 Installing symlink pointing to librte_net_i40e.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:02:03.361 Installing symlink pointing to librte_net_i40e.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:02:03.361 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:02:03.361 04:37:53 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:03.361 04:37:53 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:03.361 00:02:03.361 real 0m40.693s 00:02:03.361 user 14m9.460s 00:02:03.361 sys 2m4.041s 00:02:03.361 04:37:53 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:03.361 04:37:53 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:03.361 ************************************ 00:02:03.361 END TEST build_native_dpdk 00:02:03.361 ************************************ 00:02:03.361 04:37:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:03.361 04:37:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:03.361 04:37:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:03.361 04:37:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:03.361 04:37:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:03.361 04:37:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:03.361 04:37:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:03.361 04:37:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:03.361 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:03.620 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.620 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.620 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:03.879 Using 'verbs' RDMA provider 00:02:14.419 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:24.400 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:24.400 Creating mk/config.mk...done. 00:02:24.400 Creating mk/cc.flags.mk...done. 00:02:24.400 Type 'make' to build. 
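Editor's note: the DPDK stage above ends by installing the versioned shared-library symlink chains (librte_X.so.25.0 -> .so.25 -> .so), copying the PMDs under dpdk/pmds-25.0/, and writing libdpdk.pc and libdpdk-libs.pc into build/lib/pkgconfig; the SPDK configure that follows then discovers those libraries through pkg-config ("Using .../dpdk/build/lib/pkgconfig for additional libs..."). A minimal sketch of how that handoff can be checked by hand, assuming the same workspace paths printed in this log:

# Point pkg-config at the freshly installed out-of-tree DPDK (path taken from the log above).
export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig

# Confirm libdpdk.pc resolves and inspect the flags a consumer such as SPDK's configure would use.
pkg-config --modversion libdpdk
pkg-config --cflags --libs libdpdk

# Follow one of the symlink chains created by the install step (librte_eal as an example).
ls -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so*

This is only an illustration of the pkg-config lookup; the actual wiring in this run is done by SPDK's configure via the --with-dpdk=.../dpdk/build option shown above.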
00:02:24.400 04:38:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:02:24.400 04:38:13 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:24.400 04:38:13 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:24.400 04:38:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.400 ************************************ 00:02:24.400 START TEST make 00:02:24.400 ************************************ 00:02:24.400 04:38:13 make -- common/autotest_common.sh@1125 -- $ make -j48 00:02:24.400 make[1]: Nothing to be done for 'all'. 00:02:25.355 The Meson build system 00:02:25.355 Version: 1.5.0 00:02:25.355 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:25.355 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:25.355 Build type: native build 00:02:25.355 Project name: libvfio-user 00:02:25.355 Project version: 0.0.1 00:02:25.355 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:25.355 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:25.355 Host machine cpu family: x86_64 00:02:25.355 Host machine cpu: x86_64 00:02:25.355 Run-time dependency threads found: YES 00:02:25.355 Library dl found: YES 00:02:25.355 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:25.355 Run-time dependency json-c found: YES 0.17 00:02:25.355 Run-time dependency cmocka found: YES 1.1.7 00:02:25.355 Program pytest-3 found: NO 00:02:25.355 Program flake8 found: NO 00:02:25.355 Program misspell-fixer found: NO 00:02:25.355 Program restructuredtext-lint found: NO 00:02:25.355 Program valgrind found: YES (/usr/bin/valgrind) 00:02:25.355 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:25.355 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:25.355 Compiler for C supports arguments -Wwrite-strings: YES 00:02:25.355 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:25.355 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:25.355 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:25.355 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:25.355 Build targets in project: 8 00:02:25.355 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:25.355 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:25.355 00:02:25.355 libvfio-user 0.0.1 00:02:25.355 00:02:25.355 User defined options 00:02:25.355 buildtype : debug 00:02:25.355 default_library: shared 00:02:25.355 libdir : /usr/local/lib 00:02:25.355 00:02:25.355 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:26.297 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:26.560 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:26.560 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:26.560 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:26.560 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:26.560 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:26.560 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:26.560 [7/37] Compiling C object samples/null.p/null.c.o 00:02:26.560 [8/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:26.560 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:26.560 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:26.560 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:26.560 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:26.560 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:26.823 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:26.823 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:26.823 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:26.823 [17/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:26.823 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:26.823 [19/37] Compiling C object samples/server.p/server.c.o 00:02:26.823 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:26.823 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:26.823 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:26.823 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:26.823 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:26.823 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:26.823 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:26.823 [27/37] Compiling C object samples/client.p/client.c.o 00:02:26.823 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:02:26.823 [29/37] Linking target samples/client 00:02:26.823 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:27.086 [31/37] Linking target test/unit_tests 00:02:27.086 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:27.086 [33/37] Linking target samples/server 00:02:27.086 [34/37] Linking target samples/lspci 00:02:27.086 [35/37] Linking target samples/null 00:02:27.086 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:27.086 [37/37] Linking target samples/gpio-pci-idio-16 00:02:27.086 INFO: autodetecting backend as ninja 00:02:27.086 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:27.347 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:28.370 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:28.370 ninja: no work to do. 00:03:07.089 CC lib/log/log.o 00:03:07.089 CC lib/log/log_flags.o 00:03:07.089 CC lib/log/log_deprecated.o 00:03:07.089 CC lib/ut/ut.o 00:03:07.089 CC lib/ut_mock/mock.o 00:03:07.089 LIB libspdk_ut.a 00:03:07.089 LIB libspdk_ut_mock.a 00:03:07.089 LIB libspdk_log.a 00:03:07.089 SO libspdk_ut.so.2.0 00:03:07.089 SO libspdk_ut_mock.so.6.0 00:03:07.089 SO libspdk_log.so.7.1 00:03:07.089 SYMLINK libspdk_ut.so 00:03:07.089 SYMLINK libspdk_ut_mock.so 00:03:07.089 SYMLINK libspdk_log.so 00:03:07.089 CC lib/dma/dma.o 00:03:07.089 CC lib/ioat/ioat.o 00:03:07.089 CXX lib/trace_parser/trace.o 00:03:07.089 CC lib/util/base64.o 00:03:07.089 CC lib/util/bit_array.o 00:03:07.089 CC lib/util/cpuset.o 00:03:07.089 CC lib/util/crc16.o 00:03:07.089 CC lib/util/crc32.o 00:03:07.089 CC lib/util/crc32c.o 00:03:07.089 CC lib/util/crc32_ieee.o 00:03:07.089 CC lib/util/crc64.o 00:03:07.089 CC lib/util/dif.o 00:03:07.089 CC lib/util/fd.o 00:03:07.089 CC lib/util/fd_group.o 00:03:07.089 CC lib/util/file.o 00:03:07.089 CC lib/util/hexlify.o 00:03:07.089 CC lib/util/iov.o 00:03:07.089 CC lib/util/math.o 00:03:07.089 CC lib/util/net.o 00:03:07.089 CC lib/util/pipe.o 00:03:07.089 CC lib/util/strerror_tls.o 00:03:07.089 CC lib/util/uuid.o 00:03:07.089 CC lib/util/string.o 00:03:07.089 CC lib/util/xor.o 00:03:07.089 CC lib/util/zipf.o 00:03:07.089 CC lib/util/md5.o 00:03:07.089 CC lib/vfio_user/host/vfio_user_pci.o 00:03:07.089 CC lib/vfio_user/host/vfio_user.o 00:03:07.089 LIB libspdk_ioat.a 00:03:07.089 LIB libspdk_dma.a 00:03:07.089 SO libspdk_ioat.so.7.0 00:03:07.089 SO libspdk_dma.so.5.0 00:03:07.089 SYMLINK libspdk_ioat.so 00:03:07.089 SYMLINK libspdk_dma.so 00:03:07.089 LIB libspdk_vfio_user.a 00:03:07.089 SO libspdk_vfio_user.so.5.0 00:03:07.089 SYMLINK libspdk_vfio_user.so 00:03:07.089 LIB libspdk_util.a 00:03:07.089 SO libspdk_util.so.10.0 00:03:07.089 SYMLINK libspdk_util.so 00:03:07.089 LIB libspdk_trace_parser.a 00:03:07.089 SO libspdk_trace_parser.so.6.0 00:03:07.089 CC lib/rdma_provider/common.o 00:03:07.089 CC lib/rdma_utils/rdma_utils.o 00:03:07.089 CC lib/conf/conf.o 00:03:07.089 CC lib/idxd/idxd.o 00:03:07.089 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:07.089 CC lib/env_dpdk/env.o 00:03:07.089 CC lib/vmd/vmd.o 00:03:07.089 CC lib/json/json_parse.o 00:03:07.089 CC lib/idxd/idxd_user.o 00:03:07.089 CC lib/json/json_util.o 00:03:07.089 CC lib/vmd/led.o 00:03:07.089 CC lib/env_dpdk/memory.o 00:03:07.089 CC lib/json/json_write.o 00:03:07.089 CC lib/idxd/idxd_kernel.o 00:03:07.089 CC lib/env_dpdk/pci.o 00:03:07.089 CC lib/env_dpdk/init.o 00:03:07.089 CC lib/env_dpdk/threads.o 00:03:07.089 CC lib/env_dpdk/pci_ioat.o 00:03:07.089 CC lib/env_dpdk/pci_virtio.o 00:03:07.089 CC lib/env_dpdk/pci_vmd.o 00:03:07.089 CC lib/env_dpdk/pci_idxd.o 00:03:07.089 CC lib/env_dpdk/pci_event.o 00:03:07.089 CC lib/env_dpdk/sigbus_handler.o 00:03:07.089 CC lib/env_dpdk/pci_dpdk.o 00:03:07.089 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:07.089 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:07.089 SYMLINK libspdk_trace_parser.so 00:03:07.089 LIB libspdk_rdma_provider.a 00:03:07.089 SO libspdk_rdma_provider.so.6.0 00:03:07.089 LIB libspdk_conf.a 00:03:07.089 SO libspdk_conf.so.6.0 00:03:07.089 
SYMLINK libspdk_rdma_provider.so 00:03:07.089 LIB libspdk_rdma_utils.a 00:03:07.089 SYMLINK libspdk_conf.so 00:03:07.089 SO libspdk_rdma_utils.so.1.0 00:03:07.089 LIB libspdk_json.a 00:03:07.089 SO libspdk_json.so.6.0 00:03:07.089 SYMLINK libspdk_rdma_utils.so 00:03:07.089 SYMLINK libspdk_json.so 00:03:07.089 CC lib/jsonrpc/jsonrpc_server.o 00:03:07.089 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:07.089 CC lib/jsonrpc/jsonrpc_client.o 00:03:07.089 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:07.089 LIB libspdk_idxd.a 00:03:07.089 SO libspdk_idxd.so.12.1 00:03:07.089 LIB libspdk_vmd.a 00:03:07.089 SO libspdk_vmd.so.6.0 00:03:07.089 SYMLINK libspdk_idxd.so 00:03:07.089 SYMLINK libspdk_vmd.so 00:03:07.089 LIB libspdk_jsonrpc.a 00:03:07.089 SO libspdk_jsonrpc.so.6.0 00:03:07.089 SYMLINK libspdk_jsonrpc.so 00:03:07.348 CC lib/rpc/rpc.o 00:03:07.606 LIB libspdk_rpc.a 00:03:07.606 SO libspdk_rpc.so.6.0 00:03:07.606 SYMLINK libspdk_rpc.so 00:03:07.865 LIB libspdk_env_dpdk.a 00:03:07.865 CC lib/notify/notify.o 00:03:07.865 CC lib/trace/trace.o 00:03:07.865 CC lib/notify/notify_rpc.o 00:03:07.865 CC lib/trace/trace_flags.o 00:03:07.865 CC lib/trace/trace_rpc.o 00:03:07.865 CC lib/keyring/keyring.o 00:03:07.865 CC lib/keyring/keyring_rpc.o 00:03:07.865 SO libspdk_env_dpdk.so.15.1 00:03:07.865 SYMLINK libspdk_env_dpdk.so 00:03:08.123 LIB libspdk_notify.a 00:03:08.123 SO libspdk_notify.so.6.0 00:03:08.123 SYMLINK libspdk_notify.so 00:03:08.123 LIB libspdk_keyring.a 00:03:08.123 LIB libspdk_trace.a 00:03:08.123 SO libspdk_keyring.so.2.0 00:03:08.123 SO libspdk_trace.so.11.0 00:03:08.123 SYMLINK libspdk_keyring.so 00:03:08.123 SYMLINK libspdk_trace.so 00:03:08.381 CC lib/thread/thread.o 00:03:08.381 CC lib/thread/iobuf.o 00:03:08.381 CC lib/sock/sock.o 00:03:08.381 CC lib/sock/sock_rpc.o 00:03:08.947 LIB libspdk_sock.a 00:03:08.947 SO libspdk_sock.so.10.0 00:03:08.947 SYMLINK libspdk_sock.so 00:03:08.947 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:08.947 CC lib/nvme/nvme_ctrlr.o 00:03:08.947 CC lib/nvme/nvme_fabric.o 00:03:08.947 CC lib/nvme/nvme_ns_cmd.o 00:03:08.947 CC lib/nvme/nvme_ns.o 00:03:08.947 CC lib/nvme/nvme_pcie_common.o 00:03:08.947 CC lib/nvme/nvme_pcie.o 00:03:08.947 CC lib/nvme/nvme_qpair.o 00:03:08.947 CC lib/nvme/nvme.o 00:03:08.947 CC lib/nvme/nvme_quirks.o 00:03:08.947 CC lib/nvme/nvme_transport.o 00:03:08.947 CC lib/nvme/nvme_discovery.o 00:03:08.947 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:08.947 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:08.947 CC lib/nvme/nvme_tcp.o 00:03:08.947 CC lib/nvme/nvme_opal.o 00:03:08.947 CC lib/nvme/nvme_io_msg.o 00:03:08.947 CC lib/nvme/nvme_poll_group.o 00:03:08.947 CC lib/nvme/nvme_zns.o 00:03:08.947 CC lib/nvme/nvme_stubs.o 00:03:08.947 CC lib/nvme/nvme_auth.o 00:03:08.947 CC lib/nvme/nvme_cuse.o 00:03:08.947 CC lib/nvme/nvme_vfio_user.o 00:03:08.947 CC lib/nvme/nvme_rdma.o 00:03:10.319 LIB libspdk_thread.a 00:03:10.319 SO libspdk_thread.so.11.0 00:03:10.319 SYMLINK libspdk_thread.so 00:03:10.319 CC lib/blob/blobstore.o 00:03:10.319 CC lib/virtio/virtio.o 00:03:10.319 CC lib/init/json_config.o 00:03:10.319 CC lib/fsdev/fsdev.o 00:03:10.319 CC lib/blob/request.o 00:03:10.319 CC lib/vfu_tgt/tgt_endpoint.o 00:03:10.319 CC lib/virtio/virtio_vhost_user.o 00:03:10.319 CC lib/accel/accel.o 00:03:10.319 CC lib/fsdev/fsdev_io.o 00:03:10.319 CC lib/blob/zeroes.o 00:03:10.319 CC lib/vfu_tgt/tgt_rpc.o 00:03:10.319 CC lib/virtio/virtio_vfio_user.o 00:03:10.319 CC lib/init/subsystem.o 00:03:10.319 CC lib/accel/accel_rpc.o 00:03:10.319 CC lib/fsdev/fsdev_rpc.o 00:03:10.319 CC 
lib/blob/blob_bs_dev.o 00:03:10.319 CC lib/accel/accel_sw.o 00:03:10.319 CC lib/virtio/virtio_pci.o 00:03:10.319 CC lib/init/rpc.o 00:03:10.319 CC lib/init/subsystem_rpc.o 00:03:10.577 LIB libspdk_init.a 00:03:10.577 SO libspdk_init.so.6.0 00:03:10.577 LIB libspdk_virtio.a 00:03:10.577 SYMLINK libspdk_init.so 00:03:10.577 LIB libspdk_vfu_tgt.a 00:03:10.577 SO libspdk_vfu_tgt.so.3.0 00:03:10.834 SO libspdk_virtio.so.7.0 00:03:10.834 SYMLINK libspdk_vfu_tgt.so 00:03:10.834 SYMLINK libspdk_virtio.so 00:03:10.834 CC lib/event/app.o 00:03:10.834 CC lib/event/reactor.o 00:03:10.834 CC lib/event/log_rpc.o 00:03:10.834 CC lib/event/app_rpc.o 00:03:10.834 CC lib/event/scheduler_static.o 00:03:11.092 LIB libspdk_fsdev.a 00:03:11.092 SO libspdk_fsdev.so.2.0 00:03:11.092 SYMLINK libspdk_fsdev.so 00:03:11.350 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:11.350 LIB libspdk_event.a 00:03:11.350 SO libspdk_event.so.14.0 00:03:11.350 SYMLINK libspdk_event.so 00:03:11.608 LIB libspdk_accel.a 00:03:11.608 SO libspdk_accel.so.16.0 00:03:11.608 LIB libspdk_nvme.a 00:03:11.608 SYMLINK libspdk_accel.so 00:03:11.608 SO libspdk_nvme.so.14.1 00:03:11.866 CC lib/bdev/bdev.o 00:03:11.866 CC lib/bdev/bdev_rpc.o 00:03:11.866 CC lib/bdev/bdev_zone.o 00:03:11.866 CC lib/bdev/part.o 00:03:11.866 CC lib/bdev/scsi_nvme.o 00:03:11.866 LIB libspdk_fuse_dispatcher.a 00:03:11.866 SYMLINK libspdk_nvme.so 00:03:11.866 SO libspdk_fuse_dispatcher.so.1.0 00:03:12.125 SYMLINK libspdk_fuse_dispatcher.so 00:03:13.499 LIB libspdk_blob.a 00:03:13.499 SO libspdk_blob.so.11.0 00:03:13.499 SYMLINK libspdk_blob.so 00:03:13.757 CC lib/lvol/lvol.o 00:03:13.757 CC lib/blobfs/blobfs.o 00:03:13.757 CC lib/blobfs/tree.o 00:03:14.322 LIB libspdk_bdev.a 00:03:14.322 SO libspdk_bdev.so.17.0 00:03:14.583 SYMLINK libspdk_bdev.so 00:03:14.583 LIB libspdk_blobfs.a 00:03:14.583 SO libspdk_blobfs.so.10.0 00:03:14.583 SYMLINK libspdk_blobfs.so 00:03:14.583 CC lib/ublk/ublk.o 00:03:14.583 CC lib/ublk/ublk_rpc.o 00:03:14.583 CC lib/nvmf/ctrlr.o 00:03:14.583 CC lib/nbd/nbd.o 00:03:14.583 CC lib/nvmf/ctrlr_discovery.o 00:03:14.583 CC lib/nbd/nbd_rpc.o 00:03:14.583 CC lib/scsi/dev.o 00:03:14.583 CC lib/ftl/ftl_core.o 00:03:14.583 CC lib/ftl/ftl_init.o 00:03:14.583 CC lib/ftl/ftl_layout.o 00:03:14.583 CC lib/scsi/lun.o 00:03:14.583 CC lib/nvmf/ctrlr_bdev.o 00:03:14.583 CC lib/ftl/ftl_debug.o 00:03:14.583 CC lib/scsi/port.o 00:03:14.583 CC lib/nvmf/subsystem.o 00:03:14.583 CC lib/scsi/scsi.o 00:03:14.583 CC lib/nvmf/nvmf.o 00:03:14.583 CC lib/ftl/ftl_io.o 00:03:14.583 CC lib/scsi/scsi_bdev.o 00:03:14.583 CC lib/nvmf/nvmf_rpc.o 00:03:14.583 CC lib/ftl/ftl_sb.o 00:03:14.583 CC lib/nvmf/transport.o 00:03:14.583 CC lib/scsi/scsi_pr.o 00:03:14.583 CC lib/ftl/ftl_l2p.o 00:03:14.583 CC lib/scsi/scsi_rpc.o 00:03:14.583 CC lib/nvmf/tcp.o 00:03:14.583 CC lib/ftl/ftl_l2p_flat.o 00:03:14.583 CC lib/scsi/task.o 00:03:14.583 CC lib/nvmf/stubs.o 00:03:14.583 CC lib/ftl/ftl_nv_cache.o 00:03:14.583 CC lib/nvmf/mdns_server.o 00:03:14.583 CC lib/ftl/ftl_band.o 00:03:14.583 CC lib/nvmf/vfio_user.o 00:03:14.583 CC lib/ftl/ftl_band_ops.o 00:03:14.583 CC lib/nvmf/rdma.o 00:03:14.583 CC lib/ftl/ftl_writer.o 00:03:14.583 CC lib/nvmf/auth.o 00:03:14.583 CC lib/ftl/ftl_rq.o 00:03:14.583 CC lib/ftl/ftl_reloc.o 00:03:14.583 CC lib/ftl/ftl_l2p_cache.o 00:03:14.583 CC lib/ftl/ftl_p2l.o 00:03:14.583 CC lib/ftl/ftl_p2l_log.o 00:03:14.583 LIB libspdk_lvol.a 00:03:14.583 CC lib/ftl/mngt/ftl_mngt.o 00:03:14.583 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:14.583 CC lib/ftl/mngt/ftl_mngt_shutdown.o 
00:03:14.583 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:14.583 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:14.851 SO libspdk_lvol.so.10.0 00:03:14.851 SYMLINK libspdk_lvol.so 00:03:14.851 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:15.113 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:15.113 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:15.113 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:15.113 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:15.113 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:15.113 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:15.113 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:15.113 CC lib/ftl/utils/ftl_conf.o 00:03:15.113 CC lib/ftl/utils/ftl_md.o 00:03:15.113 CC lib/ftl/utils/ftl_mempool.o 00:03:15.113 CC lib/ftl/utils/ftl_bitmap.o 00:03:15.113 CC lib/ftl/utils/ftl_property.o 00:03:15.113 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:15.113 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:15.113 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:15.113 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:15.374 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:15.374 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:15.374 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:15.374 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:15.374 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:15.374 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:15.374 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:15.374 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:15.374 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:15.374 CC lib/ftl/base/ftl_base_dev.o 00:03:15.374 CC lib/ftl/base/ftl_base_bdev.o 00:03:15.374 CC lib/ftl/ftl_trace.o 00:03:15.633 LIB libspdk_nbd.a 00:03:15.633 SO libspdk_nbd.so.7.0 00:03:15.633 SYMLINK libspdk_nbd.so 00:03:15.633 LIB libspdk_scsi.a 00:03:15.633 SO libspdk_scsi.so.9.0 00:03:15.891 SYMLINK libspdk_scsi.so 00:03:15.891 LIB libspdk_ublk.a 00:03:15.891 SO libspdk_ublk.so.3.0 00:03:15.891 SYMLINK libspdk_ublk.so 00:03:15.891 CC lib/vhost/vhost.o 00:03:15.891 CC lib/iscsi/conn.o 00:03:15.891 CC lib/vhost/vhost_rpc.o 00:03:15.891 CC lib/iscsi/init_grp.o 00:03:15.891 CC lib/vhost/vhost_scsi.o 00:03:15.891 CC lib/iscsi/iscsi.o 00:03:15.891 CC lib/vhost/vhost_blk.o 00:03:15.891 CC lib/iscsi/param.o 00:03:15.891 CC lib/vhost/rte_vhost_user.o 00:03:15.891 CC lib/iscsi/portal_grp.o 00:03:15.891 CC lib/iscsi/tgt_node.o 00:03:15.891 CC lib/iscsi/iscsi_subsystem.o 00:03:15.891 CC lib/iscsi/iscsi_rpc.o 00:03:15.891 CC lib/iscsi/task.o 00:03:16.150 LIB libspdk_ftl.a 00:03:16.408 SO libspdk_ftl.so.9.0 00:03:16.666 SYMLINK libspdk_ftl.so 00:03:17.232 LIB libspdk_vhost.a 00:03:17.232 SO libspdk_vhost.so.8.0 00:03:17.489 SYMLINK libspdk_vhost.so 00:03:17.489 LIB libspdk_nvmf.a 00:03:17.489 LIB libspdk_iscsi.a 00:03:17.489 SO libspdk_nvmf.so.20.0 00:03:17.489 SO libspdk_iscsi.so.8.0 00:03:17.747 SYMLINK libspdk_iscsi.so 00:03:17.747 SYMLINK libspdk_nvmf.so 00:03:18.004 CC module/env_dpdk/env_dpdk_rpc.o 00:03:18.004 CC module/vfu_device/vfu_virtio.o 00:03:18.004 CC module/vfu_device/vfu_virtio_blk.o 00:03:18.004 CC module/vfu_device/vfu_virtio_scsi.o 00:03:18.004 CC module/vfu_device/vfu_virtio_rpc.o 00:03:18.004 CC module/vfu_device/vfu_virtio_fs.o 00:03:18.004 CC module/accel/dsa/accel_dsa.o 00:03:18.004 CC module/fsdev/aio/fsdev_aio.o 00:03:18.004 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:18.004 CC module/accel/dsa/accel_dsa_rpc.o 00:03:18.004 CC module/fsdev/aio/linux_aio_mgr.o 00:03:18.004 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:18.004 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:18.004 CC module/accel/error/accel_error.o 00:03:18.004 CC module/scheduler/gscheduler/gscheduler.o 00:03:18.004 CC 
module/keyring/linux/keyring.o 00:03:18.004 CC module/accel/error/accel_error_rpc.o 00:03:18.004 CC module/accel/ioat/accel_ioat.o 00:03:18.004 CC module/keyring/linux/keyring_rpc.o 00:03:18.004 CC module/keyring/file/keyring.o 00:03:18.004 CC module/accel/ioat/accel_ioat_rpc.o 00:03:18.004 CC module/keyring/file/keyring_rpc.o 00:03:18.004 CC module/sock/posix/posix.o 00:03:18.004 CC module/accel/iaa/accel_iaa.o 00:03:18.004 CC module/blob/bdev/blob_bdev.o 00:03:18.004 CC module/accel/iaa/accel_iaa_rpc.o 00:03:18.004 LIB libspdk_env_dpdk_rpc.a 00:03:18.262 SO libspdk_env_dpdk_rpc.so.6.0 00:03:18.262 SYMLINK libspdk_env_dpdk_rpc.so 00:03:18.262 LIB libspdk_keyring_file.a 00:03:18.262 LIB libspdk_keyring_linux.a 00:03:18.262 LIB libspdk_scheduler_gscheduler.a 00:03:18.262 LIB libspdk_scheduler_dpdk_governor.a 00:03:18.262 SO libspdk_keyring_file.so.2.0 00:03:18.262 SO libspdk_keyring_linux.so.1.0 00:03:18.262 SO libspdk_scheduler_gscheduler.so.4.0 00:03:18.262 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:18.262 LIB libspdk_accel_ioat.a 00:03:18.262 LIB libspdk_accel_iaa.a 00:03:18.262 SYMLINK libspdk_keyring_file.so 00:03:18.262 SO libspdk_accel_ioat.so.6.0 00:03:18.262 SYMLINK libspdk_keyring_linux.so 00:03:18.262 SYMLINK libspdk_scheduler_gscheduler.so 00:03:18.262 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:18.262 SO libspdk_accel_iaa.so.3.0 00:03:18.262 LIB libspdk_accel_error.a 00:03:18.262 SYMLINK libspdk_accel_ioat.so 00:03:18.262 LIB libspdk_scheduler_dynamic.a 00:03:18.262 LIB libspdk_blob_bdev.a 00:03:18.262 LIB libspdk_accel_dsa.a 00:03:18.262 SO libspdk_accel_error.so.2.0 00:03:18.520 SYMLINK libspdk_accel_iaa.so 00:03:18.520 SO libspdk_scheduler_dynamic.so.4.0 00:03:18.520 SO libspdk_blob_bdev.so.11.0 00:03:18.520 SO libspdk_accel_dsa.so.5.0 00:03:18.520 SYMLINK libspdk_accel_error.so 00:03:18.520 SYMLINK libspdk_scheduler_dynamic.so 00:03:18.520 SYMLINK libspdk_blob_bdev.so 00:03:18.520 SYMLINK libspdk_accel_dsa.so 00:03:18.780 LIB libspdk_vfu_device.a 00:03:18.780 SO libspdk_vfu_device.so.3.0 00:03:18.780 CC module/bdev/error/vbdev_error.o 00:03:18.780 CC module/bdev/error/vbdev_error_rpc.o 00:03:18.780 CC module/bdev/gpt/gpt.o 00:03:18.780 CC module/bdev/lvol/vbdev_lvol.o 00:03:18.780 CC module/bdev/passthru/vbdev_passthru.o 00:03:18.780 CC module/bdev/malloc/bdev_malloc.o 00:03:18.780 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:18.780 CC module/blobfs/bdev/blobfs_bdev.o 00:03:18.780 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:18.780 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:18.780 CC module/bdev/delay/vbdev_delay.o 00:03:18.780 CC module/bdev/gpt/vbdev_gpt.o 00:03:18.780 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:18.780 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:18.780 CC module/bdev/nvme/bdev_nvme.o 00:03:18.781 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:18.781 CC module/bdev/raid/bdev_raid.o 00:03:18.781 CC module/bdev/aio/bdev_aio.o 00:03:18.781 CC module/bdev/nvme/nvme_rpc.o 00:03:18.781 CC module/bdev/raid/bdev_raid_rpc.o 00:03:18.781 CC module/bdev/nvme/bdev_mdns_client.o 00:03:18.781 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:18.781 CC module/bdev/aio/bdev_aio_rpc.o 00:03:18.781 CC module/bdev/raid/bdev_raid_sb.o 00:03:18.781 CC module/bdev/null/bdev_null.o 00:03:18.781 CC module/bdev/nvme/vbdev_opal.o 00:03:18.781 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:18.781 CC module/bdev/raid/raid0.o 00:03:18.781 CC module/bdev/null/bdev_null_rpc.o 00:03:18.781 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:18.781 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:18.781 CC module/bdev/raid/raid1.o 00:03:18.781 CC module/bdev/iscsi/bdev_iscsi.o 00:03:18.781 CC module/bdev/raid/concat.o 00:03:18.781 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:18.781 CC module/bdev/ftl/bdev_ftl.o 00:03:18.781 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:18.781 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:18.781 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:18.781 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:18.781 CC module/bdev/split/vbdev_split.o 00:03:18.781 CC module/bdev/split/vbdev_split_rpc.o 00:03:18.781 SYMLINK libspdk_vfu_device.so 00:03:19.040 LIB libspdk_fsdev_aio.a 00:03:19.040 SO libspdk_fsdev_aio.so.1.0 00:03:19.040 LIB libspdk_sock_posix.a 00:03:19.040 SO libspdk_sock_posix.so.6.0 00:03:19.040 SYMLINK libspdk_fsdev_aio.so 00:03:19.040 LIB libspdk_blobfs_bdev.a 00:03:19.040 SO libspdk_blobfs_bdev.so.6.0 00:03:19.040 SYMLINK libspdk_sock_posix.so 00:03:19.297 LIB libspdk_bdev_null.a 00:03:19.297 LIB libspdk_bdev_error.a 00:03:19.297 LIB libspdk_bdev_split.a 00:03:19.297 LIB libspdk_bdev_gpt.a 00:03:19.297 LIB libspdk_bdev_iscsi.a 00:03:19.297 SO libspdk_bdev_null.so.6.0 00:03:19.297 SO libspdk_bdev_gpt.so.6.0 00:03:19.297 SO libspdk_bdev_error.so.6.0 00:03:19.297 SO libspdk_bdev_split.so.6.0 00:03:19.297 SO libspdk_bdev_iscsi.so.6.0 00:03:19.297 LIB libspdk_bdev_ftl.a 00:03:19.297 LIB libspdk_bdev_passthru.a 00:03:19.297 SYMLINK libspdk_blobfs_bdev.so 00:03:19.297 SO libspdk_bdev_ftl.so.6.0 00:03:19.297 SO libspdk_bdev_passthru.so.6.0 00:03:19.297 SYMLINK libspdk_bdev_null.so 00:03:19.297 LIB libspdk_bdev_aio.a 00:03:19.297 LIB libspdk_bdev_zone_block.a 00:03:19.297 SYMLINK libspdk_bdev_error.so 00:03:19.297 SYMLINK libspdk_bdev_gpt.so 00:03:19.297 SYMLINK libspdk_bdev_split.so 00:03:19.297 SYMLINK libspdk_bdev_iscsi.so 00:03:19.297 SO libspdk_bdev_aio.so.6.0 00:03:19.297 SO libspdk_bdev_zone_block.so.6.0 00:03:19.297 SYMLINK libspdk_bdev_ftl.so 00:03:19.297 SYMLINK libspdk_bdev_passthru.so 00:03:19.297 LIB libspdk_bdev_malloc.a 00:03:19.297 SYMLINK libspdk_bdev_aio.so 00:03:19.297 SYMLINK libspdk_bdev_zone_block.so 00:03:19.297 LIB libspdk_bdev_delay.a 00:03:19.297 SO libspdk_bdev_malloc.so.6.0 00:03:19.298 SO libspdk_bdev_delay.so.6.0 00:03:19.556 SYMLINK libspdk_bdev_malloc.so 00:03:19.556 SYMLINK libspdk_bdev_delay.so 00:03:19.556 LIB libspdk_bdev_lvol.a 00:03:19.556 LIB libspdk_bdev_virtio.a 00:03:19.556 SO libspdk_bdev_lvol.so.6.0 00:03:19.556 SO libspdk_bdev_virtio.so.6.0 00:03:19.556 SYMLINK libspdk_bdev_lvol.so 00:03:19.556 SYMLINK libspdk_bdev_virtio.so 00:03:20.123 LIB libspdk_bdev_raid.a 00:03:20.123 SO libspdk_bdev_raid.so.6.0 00:03:20.123 SYMLINK libspdk_bdev_raid.so 00:03:21.501 LIB libspdk_bdev_nvme.a 00:03:21.501 SO libspdk_bdev_nvme.so.7.0 00:03:21.501 SYMLINK libspdk_bdev_nvme.so 00:03:22.068 CC module/event/subsystems/scheduler/scheduler.o 00:03:22.068 CC module/event/subsystems/keyring/keyring.o 00:03:22.068 CC module/event/subsystems/fsdev/fsdev.o 00:03:22.068 CC module/event/subsystems/iobuf/iobuf.o 00:03:22.068 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:22.068 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:22.068 CC module/event/subsystems/vmd/vmd.o 00:03:22.068 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:22.068 CC module/event/subsystems/sock/sock.o 00:03:22.068 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:22.068 LIB libspdk_event_keyring.a 00:03:22.068 LIB libspdk_event_vhost_blk.a 00:03:22.068 LIB libspdk_event_vfu_tgt.a 00:03:22.068 LIB 
libspdk_event_scheduler.a 00:03:22.068 LIB libspdk_event_vmd.a 00:03:22.068 LIB libspdk_event_fsdev.a 00:03:22.068 LIB libspdk_event_sock.a 00:03:22.068 SO libspdk_event_keyring.so.1.0 00:03:22.068 SO libspdk_event_vhost_blk.so.3.0 00:03:22.068 SO libspdk_event_vfu_tgt.so.3.0 00:03:22.068 SO libspdk_event_scheduler.so.4.0 00:03:22.068 LIB libspdk_event_iobuf.a 00:03:22.068 SO libspdk_event_fsdev.so.1.0 00:03:22.068 SO libspdk_event_sock.so.5.0 00:03:22.068 SO libspdk_event_vmd.so.6.0 00:03:22.068 SO libspdk_event_iobuf.so.3.0 00:03:22.068 SYMLINK libspdk_event_keyring.so 00:03:22.068 SYMLINK libspdk_event_vhost_blk.so 00:03:22.068 SYMLINK libspdk_event_vfu_tgt.so 00:03:22.068 SYMLINK libspdk_event_scheduler.so 00:03:22.068 SYMLINK libspdk_event_fsdev.so 00:03:22.068 SYMLINK libspdk_event_sock.so 00:03:22.068 SYMLINK libspdk_event_vmd.so 00:03:22.326 SYMLINK libspdk_event_iobuf.so 00:03:22.326 CC module/event/subsystems/accel/accel.o 00:03:22.584 LIB libspdk_event_accel.a 00:03:22.584 SO libspdk_event_accel.so.6.0 00:03:22.584 SYMLINK libspdk_event_accel.so 00:03:22.842 CC module/event/subsystems/bdev/bdev.o 00:03:22.842 LIB libspdk_event_bdev.a 00:03:23.101 SO libspdk_event_bdev.so.6.0 00:03:23.101 SYMLINK libspdk_event_bdev.so 00:03:23.101 CC module/event/subsystems/ublk/ublk.o 00:03:23.101 CC module/event/subsystems/scsi/scsi.o 00:03:23.101 CC module/event/subsystems/nbd/nbd.o 00:03:23.101 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:23.101 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:23.359 LIB libspdk_event_nbd.a 00:03:23.359 LIB libspdk_event_ublk.a 00:03:23.359 LIB libspdk_event_scsi.a 00:03:23.359 SO libspdk_event_nbd.so.6.0 00:03:23.359 SO libspdk_event_ublk.so.3.0 00:03:23.359 SO libspdk_event_scsi.so.6.0 00:03:23.359 SYMLINK libspdk_event_ublk.so 00:03:23.359 SYMLINK libspdk_event_nbd.so 00:03:23.359 SYMLINK libspdk_event_scsi.so 00:03:23.359 LIB libspdk_event_nvmf.a 00:03:23.359 SO libspdk_event_nvmf.so.6.0 00:03:23.618 SYMLINK libspdk_event_nvmf.so 00:03:23.618 CC module/event/subsystems/iscsi/iscsi.o 00:03:23.618 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:23.877 LIB libspdk_event_vhost_scsi.a 00:03:23.877 LIB libspdk_event_iscsi.a 00:03:23.877 SO libspdk_event_vhost_scsi.so.3.0 00:03:23.877 SO libspdk_event_iscsi.so.6.0 00:03:23.877 SYMLINK libspdk_event_vhost_scsi.so 00:03:23.877 SYMLINK libspdk_event_iscsi.so 00:03:23.877 SO libspdk.so.6.0 00:03:23.877 SYMLINK libspdk.so 00:03:24.140 CXX app/trace/trace.o 00:03:24.140 CC test/rpc_client/rpc_client_test.o 00:03:24.140 TEST_HEADER include/spdk/accel.h 00:03:24.140 CC app/spdk_nvme_perf/perf.o 00:03:24.140 TEST_HEADER include/spdk/accel_module.h 00:03:24.140 TEST_HEADER include/spdk/assert.h 00:03:24.140 TEST_HEADER include/spdk/barrier.h 00:03:24.140 TEST_HEADER include/spdk/base64.h 00:03:24.140 CC app/spdk_nvme_identify/identify.o 00:03:24.140 CC app/spdk_lspci/spdk_lspci.o 00:03:24.140 CC app/spdk_top/spdk_top.o 00:03:24.140 TEST_HEADER include/spdk/bdev.h 00:03:24.140 TEST_HEADER include/spdk/bdev_module.h 00:03:24.140 TEST_HEADER include/spdk/bit_array.h 00:03:24.140 TEST_HEADER include/spdk/bdev_zone.h 00:03:24.140 TEST_HEADER include/spdk/bit_pool.h 00:03:24.140 TEST_HEADER include/spdk/blob_bdev.h 00:03:24.140 CC app/spdk_nvme_discover/discovery_aer.o 00:03:24.140 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:24.140 TEST_HEADER include/spdk/blobfs.h 00:03:24.140 CC app/trace_record/trace_record.o 00:03:24.140 TEST_HEADER include/spdk/blob.h 00:03:24.140 TEST_HEADER include/spdk/conf.h 00:03:24.140 
TEST_HEADER include/spdk/config.h 00:03:24.140 TEST_HEADER include/spdk/cpuset.h 00:03:24.140 TEST_HEADER include/spdk/crc16.h 00:03:24.140 TEST_HEADER include/spdk/crc32.h 00:03:24.140 TEST_HEADER include/spdk/crc64.h 00:03:24.140 TEST_HEADER include/spdk/dif.h 00:03:24.140 TEST_HEADER include/spdk/dma.h 00:03:24.140 TEST_HEADER include/spdk/endian.h 00:03:24.140 TEST_HEADER include/spdk/env_dpdk.h 00:03:24.140 TEST_HEADER include/spdk/env.h 00:03:24.140 TEST_HEADER include/spdk/fd_group.h 00:03:24.140 TEST_HEADER include/spdk/event.h 00:03:24.140 TEST_HEADER include/spdk/fd.h 00:03:24.140 TEST_HEADER include/spdk/file.h 00:03:24.140 TEST_HEADER include/spdk/fsdev.h 00:03:24.140 TEST_HEADER include/spdk/fsdev_module.h 00:03:24.140 TEST_HEADER include/spdk/ftl.h 00:03:24.140 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:24.140 TEST_HEADER include/spdk/hexlify.h 00:03:24.140 TEST_HEADER include/spdk/gpt_spec.h 00:03:24.140 TEST_HEADER include/spdk/histogram_data.h 00:03:24.140 TEST_HEADER include/spdk/idxd_spec.h 00:03:24.140 TEST_HEADER include/spdk/idxd.h 00:03:24.140 TEST_HEADER include/spdk/init.h 00:03:24.140 TEST_HEADER include/spdk/ioat.h 00:03:24.140 TEST_HEADER include/spdk/iscsi_spec.h 00:03:24.140 TEST_HEADER include/spdk/ioat_spec.h 00:03:24.140 TEST_HEADER include/spdk/jsonrpc.h 00:03:24.140 TEST_HEADER include/spdk/json.h 00:03:24.140 TEST_HEADER include/spdk/keyring.h 00:03:24.140 TEST_HEADER include/spdk/keyring_module.h 00:03:24.140 TEST_HEADER include/spdk/likely.h 00:03:24.140 TEST_HEADER include/spdk/log.h 00:03:24.140 TEST_HEADER include/spdk/md5.h 00:03:24.140 TEST_HEADER include/spdk/lvol.h 00:03:24.140 TEST_HEADER include/spdk/mmio.h 00:03:24.140 TEST_HEADER include/spdk/memory.h 00:03:24.140 TEST_HEADER include/spdk/nbd.h 00:03:24.140 TEST_HEADER include/spdk/net.h 00:03:24.140 TEST_HEADER include/spdk/notify.h 00:03:24.140 TEST_HEADER include/spdk/nvme.h 00:03:24.140 TEST_HEADER include/spdk/nvme_intel.h 00:03:24.140 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:24.140 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:24.140 TEST_HEADER include/spdk/nvme_spec.h 00:03:24.140 TEST_HEADER include/spdk/nvme_zns.h 00:03:24.140 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:24.140 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:24.140 TEST_HEADER include/spdk/nvmf.h 00:03:24.140 TEST_HEADER include/spdk/nvmf_spec.h 00:03:24.140 TEST_HEADER include/spdk/nvmf_transport.h 00:03:24.140 TEST_HEADER include/spdk/opal.h 00:03:24.140 TEST_HEADER include/spdk/opal_spec.h 00:03:24.140 TEST_HEADER include/spdk/pci_ids.h 00:03:24.140 TEST_HEADER include/spdk/pipe.h 00:03:24.140 TEST_HEADER include/spdk/queue.h 00:03:24.140 TEST_HEADER include/spdk/reduce.h 00:03:24.140 TEST_HEADER include/spdk/rpc.h 00:03:24.140 TEST_HEADER include/spdk/scsi.h 00:03:24.140 TEST_HEADER include/spdk/scheduler.h 00:03:24.140 TEST_HEADER include/spdk/sock.h 00:03:24.140 TEST_HEADER include/spdk/scsi_spec.h 00:03:24.140 TEST_HEADER include/spdk/stdinc.h 00:03:24.140 TEST_HEADER include/spdk/string.h 00:03:24.140 TEST_HEADER include/spdk/thread.h 00:03:24.140 TEST_HEADER include/spdk/trace.h 00:03:24.140 TEST_HEADER include/spdk/trace_parser.h 00:03:24.140 TEST_HEADER include/spdk/tree.h 00:03:24.140 TEST_HEADER include/spdk/ublk.h 00:03:24.140 TEST_HEADER include/spdk/uuid.h 00:03:24.140 TEST_HEADER include/spdk/util.h 00:03:24.140 TEST_HEADER include/spdk/version.h 00:03:24.140 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:24.140 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:24.140 TEST_HEADER 
include/spdk/vhost.h 00:03:24.140 TEST_HEADER include/spdk/vmd.h 00:03:24.141 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:24.141 TEST_HEADER include/spdk/zipf.h 00:03:24.141 TEST_HEADER include/spdk/xor.h 00:03:24.141 CXX test/cpp_headers/accel.o 00:03:24.141 CXX test/cpp_headers/accel_module.o 00:03:24.141 CXX test/cpp_headers/assert.o 00:03:24.141 CXX test/cpp_headers/barrier.o 00:03:24.141 CXX test/cpp_headers/base64.o 00:03:24.141 CXX test/cpp_headers/bdev.o 00:03:24.141 CXX test/cpp_headers/bdev_module.o 00:03:24.141 CXX test/cpp_headers/bdev_zone.o 00:03:24.141 CC app/spdk_dd/spdk_dd.o 00:03:24.141 CXX test/cpp_headers/bit_array.o 00:03:24.141 CXX test/cpp_headers/bit_pool.o 00:03:24.141 CXX test/cpp_headers/blob_bdev.o 00:03:24.141 CXX test/cpp_headers/blobfs_bdev.o 00:03:24.141 CXX test/cpp_headers/blobfs.o 00:03:24.141 CXX test/cpp_headers/blob.o 00:03:24.141 CXX test/cpp_headers/conf.o 00:03:24.141 CXX test/cpp_headers/config.o 00:03:24.141 CXX test/cpp_headers/cpuset.o 00:03:24.141 CXX test/cpp_headers/crc16.o 00:03:24.141 CC app/nvmf_tgt/nvmf_main.o 00:03:24.141 CC app/iscsi_tgt/iscsi_tgt.o 00:03:24.141 CXX test/cpp_headers/crc32.o 00:03:24.141 CC app/spdk_tgt/spdk_tgt.o 00:03:24.409 CC test/app/histogram_perf/histogram_perf.o 00:03:24.409 CC examples/ioat/perf/perf.o 00:03:24.409 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:24.409 CC test/thread/poller_perf/poller_perf.o 00:03:24.409 CC examples/ioat/verify/verify.o 00:03:24.409 CC test/app/jsoncat/jsoncat.o 00:03:24.409 CC test/env/pci/pci_ut.o 00:03:24.409 CC app/fio/nvme/fio_plugin.o 00:03:24.409 CC test/env/memory/memory_ut.o 00:03:24.409 CC test/app/stub/stub.o 00:03:24.409 CC examples/util/zipf/zipf.o 00:03:24.409 CC test/env/vtophys/vtophys.o 00:03:24.409 CC test/dma/test_dma/test_dma.o 00:03:24.409 CC app/fio/bdev/fio_plugin.o 00:03:24.409 CC test/app/bdev_svc/bdev_svc.o 00:03:24.409 LINK spdk_lspci 00:03:24.409 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:24.409 CC test/env/mem_callbacks/mem_callbacks.o 00:03:24.669 LINK rpc_client_test 00:03:24.669 LINK spdk_nvme_discover 00:03:24.669 CXX test/cpp_headers/crc64.o 00:03:24.669 LINK poller_perf 00:03:24.669 LINK interrupt_tgt 00:03:24.669 CXX test/cpp_headers/dif.o 00:03:24.669 LINK histogram_perf 00:03:24.669 LINK zipf 00:03:24.669 LINK vtophys 00:03:24.669 CXX test/cpp_headers/dma.o 00:03:24.669 CXX test/cpp_headers/endian.o 00:03:24.669 LINK env_dpdk_post_init 00:03:24.669 LINK nvmf_tgt 00:03:24.669 CXX test/cpp_headers/env_dpdk.o 00:03:24.669 CXX test/cpp_headers/env.o 00:03:24.669 CXX test/cpp_headers/event.o 00:03:24.669 LINK jsoncat 00:03:24.669 CXX test/cpp_headers/fd_group.o 00:03:24.669 CXX test/cpp_headers/fd.o 00:03:24.669 CXX test/cpp_headers/file.o 00:03:24.669 CXX test/cpp_headers/fsdev.o 00:03:24.669 LINK spdk_trace_record 00:03:24.669 LINK stub 00:03:24.669 CXX test/cpp_headers/fsdev_module.o 00:03:24.669 CXX test/cpp_headers/fuse_dispatcher.o 00:03:24.669 CXX test/cpp_headers/ftl.o 00:03:24.669 LINK iscsi_tgt 00:03:24.669 LINK spdk_tgt 00:03:24.669 CXX test/cpp_headers/gpt_spec.o 00:03:24.669 LINK bdev_svc 00:03:24.669 CXX test/cpp_headers/hexlify.o 00:03:24.669 LINK ioat_perf 00:03:24.933 LINK verify 00:03:24.933 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:24.933 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:24.933 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:24.933 CXX test/cpp_headers/histogram_data.o 00:03:24.933 CXX test/cpp_headers/idxd.o 00:03:24.933 CXX test/cpp_headers/idxd_spec.o 00:03:24.933 CXX 
test/cpp_headers/init.o 00:03:24.933 CXX test/cpp_headers/ioat.o 00:03:24.934 LINK spdk_dd 00:03:24.934 CXX test/cpp_headers/ioat_spec.o 00:03:24.934 CXX test/cpp_headers/iscsi_spec.o 00:03:24.934 CXX test/cpp_headers/json.o 00:03:24.934 CXX test/cpp_headers/jsonrpc.o 00:03:25.204 LINK spdk_trace 00:03:25.204 CXX test/cpp_headers/keyring.o 00:03:25.204 CXX test/cpp_headers/keyring_module.o 00:03:25.204 CXX test/cpp_headers/likely.o 00:03:25.204 CXX test/cpp_headers/log.o 00:03:25.204 LINK pci_ut 00:03:25.204 CXX test/cpp_headers/lvol.o 00:03:25.204 CXX test/cpp_headers/md5.o 00:03:25.204 CXX test/cpp_headers/memory.o 00:03:25.204 CXX test/cpp_headers/mmio.o 00:03:25.204 CXX test/cpp_headers/nbd.o 00:03:25.204 CXX test/cpp_headers/net.o 00:03:25.204 CXX test/cpp_headers/notify.o 00:03:25.204 CXX test/cpp_headers/nvme.o 00:03:25.204 CXX test/cpp_headers/nvme_intel.o 00:03:25.204 CXX test/cpp_headers/nvme_ocssd.o 00:03:25.204 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.204 CXX test/cpp_headers/nvme_spec.o 00:03:25.204 CXX test/cpp_headers/nvme_zns.o 00:03:25.204 CC test/event/event_perf/event_perf.o 00:03:25.204 CC test/event/reactor/reactor.o 00:03:25.204 CC test/event/reactor_perf/reactor_perf.o 00:03:25.204 CC examples/sock/hello_world/hello_sock.o 00:03:25.204 CXX test/cpp_headers/nvmf_cmd.o 00:03:25.204 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:25.204 CC examples/vmd/lsvmd/lsvmd.o 00:03:25.204 CC test/event/app_repeat/app_repeat.o 00:03:25.465 CXX test/cpp_headers/nvmf.o 00:03:25.465 CXX test/cpp_headers/nvmf_spec.o 00:03:25.465 CC examples/vmd/led/led.o 00:03:25.465 CC examples/idxd/perf/perf.o 00:03:25.465 CXX test/cpp_headers/nvmf_transport.o 00:03:25.465 LINK spdk_nvme 00:03:25.465 CXX test/cpp_headers/opal.o 00:03:25.465 LINK nvme_fuzz 00:03:25.465 CC examples/thread/thread/thread_ex.o 00:03:25.465 LINK spdk_bdev 00:03:25.465 CXX test/cpp_headers/opal_spec.o 00:03:25.465 LINK test_dma 00:03:25.465 CXX test/cpp_headers/pci_ids.o 00:03:25.465 CC test/event/scheduler/scheduler.o 00:03:25.465 CXX test/cpp_headers/pipe.o 00:03:25.465 CXX test/cpp_headers/queue.o 00:03:25.465 CXX test/cpp_headers/reduce.o 00:03:25.465 CXX test/cpp_headers/rpc.o 00:03:25.465 CXX test/cpp_headers/scheduler.o 00:03:25.465 CXX test/cpp_headers/scsi.o 00:03:25.465 CXX test/cpp_headers/scsi_spec.o 00:03:25.465 CXX test/cpp_headers/sock.o 00:03:25.465 CXX test/cpp_headers/stdinc.o 00:03:25.465 CXX test/cpp_headers/string.o 00:03:25.465 CXX test/cpp_headers/thread.o 00:03:25.465 CXX test/cpp_headers/trace.o 00:03:25.465 CXX test/cpp_headers/trace_parser.o 00:03:25.465 CXX test/cpp_headers/tree.o 00:03:25.465 LINK reactor 00:03:25.465 CXX test/cpp_headers/ublk.o 00:03:25.465 LINK event_perf 00:03:25.728 CXX test/cpp_headers/util.o 00:03:25.728 LINK reactor_perf 00:03:25.728 CXX test/cpp_headers/uuid.o 00:03:25.728 CXX test/cpp_headers/version.o 00:03:25.728 LINK lsvmd 00:03:25.728 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.728 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.728 CXX test/cpp_headers/vmd.o 00:03:25.728 LINK vhost_fuzz 00:03:25.728 LINK app_repeat 00:03:25.728 CXX test/cpp_headers/vhost.o 00:03:25.728 LINK led 00:03:25.728 CC app/vhost/vhost.o 00:03:25.728 CXX test/cpp_headers/xor.o 00:03:25.728 LINK spdk_nvme_perf 00:03:25.728 CXX test/cpp_headers/zipf.o 00:03:25.728 LINK mem_callbacks 00:03:25.728 LINK spdk_nvme_identify 00:03:25.728 LINK hello_sock 00:03:25.988 LINK spdk_top 00:03:25.988 LINK thread 00:03:25.988 LINK scheduler 00:03:25.988 LINK idxd_perf 00:03:25.988 CC 
test/nvme/connect_stress/connect_stress.o 00:03:25.988 CC test/nvme/sgl/sgl.o 00:03:25.988 CC test/nvme/overhead/overhead.o 00:03:25.988 CC test/nvme/simple_copy/simple_copy.o 00:03:25.988 CC test/nvme/e2edp/nvme_dp.o 00:03:25.988 CC test/nvme/err_injection/err_injection.o 00:03:25.989 CC test/nvme/reset/reset.o 00:03:25.989 CC test/nvme/boot_partition/boot_partition.o 00:03:25.989 CC test/nvme/fused_ordering/fused_ordering.o 00:03:25.989 CC test/nvme/aer/aer.o 00:03:25.989 CC test/nvme/reserve/reserve.o 00:03:25.989 CC test/nvme/startup/startup.o 00:03:25.989 CC test/nvme/cuse/cuse.o 00:03:25.989 CC test/nvme/fdp/fdp.o 00:03:25.989 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:25.989 CC test/nvme/compliance/nvme_compliance.o 00:03:25.989 LINK vhost 00:03:25.989 CC test/blobfs/mkfs/mkfs.o 00:03:25.989 CC test/accel/dif/dif.o 00:03:26.248 CC test/lvol/esnap/esnap.o 00:03:26.248 CC examples/nvme/hello_world/hello_world.o 00:03:26.248 CC examples/nvme/hotplug/hotplug.o 00:03:26.248 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:26.248 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:26.248 CC examples/nvme/reconnect/reconnect.o 00:03:26.248 CC examples/nvme/abort/abort.o 00:03:26.248 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:26.248 CC examples/nvme/arbitration/arbitration.o 00:03:26.248 LINK startup 00:03:26.248 LINK fused_ordering 00:03:26.248 LINK doorbell_aers 00:03:26.506 LINK boot_partition 00:03:26.506 LINK err_injection 00:03:26.506 LINK sgl 00:03:26.506 LINK connect_stress 00:03:26.506 LINK reset 00:03:26.506 CC examples/accel/perf/accel_perf.o 00:03:26.506 LINK memory_ut 00:03:26.506 LINK overhead 00:03:26.506 CC examples/blob/hello_world/hello_blob.o 00:03:26.506 LINK aer 00:03:26.506 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:26.506 LINK nvme_compliance 00:03:26.506 LINK reserve 00:03:26.506 CC examples/blob/cli/blobcli.o 00:03:26.506 LINK mkfs 00:03:26.506 LINK simple_copy 00:03:26.506 LINK nvme_dp 00:03:26.506 LINK fdp 00:03:26.506 LINK hotplug 00:03:26.506 LINK pmr_persistence 00:03:26.766 LINK cmb_copy 00:03:26.766 LINK hello_world 00:03:26.766 LINK abort 00:03:26.766 LINK reconnect 00:03:26.766 LINK hello_blob 00:03:26.766 LINK arbitration 00:03:26.766 LINK hello_fsdev 00:03:27.025 LINK nvme_manage 00:03:27.025 LINK accel_perf 00:03:27.025 LINK dif 00:03:27.025 LINK blobcli 00:03:27.283 LINK iscsi_fuzz 00:03:27.283 CC examples/bdev/hello_world/hello_bdev.o 00:03:27.283 CC examples/bdev/bdevperf/bdevperf.o 00:03:27.283 CC test/bdev/bdevio/bdevio.o 00:03:27.542 LINK hello_bdev 00:03:27.800 LINK cuse 00:03:27.800 LINK bdevio 00:03:28.369 LINK bdevperf 00:03:28.661 CC examples/nvmf/nvmf/nvmf.o 00:03:28.919 LINK nvmf 00:03:31.451 LINK esnap 00:03:31.451 00:03:31.451 real 1m8.127s 00:03:31.451 user 9m5.548s 00:03:31.451 sys 1m56.677s 00:03:31.451 04:39:22 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:31.451 04:39:22 make -- common/autotest_common.sh@10 -- $ set +x 00:03:31.451 ************************************ 00:03:31.451 END TEST make 00:03:31.451 ************************************ 00:03:31.710 04:39:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:31.710 04:39:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:31.710 04:39:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:31.710 04:39:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.710 04:39:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:31.710 
04:39:22 -- pm/common@44 -- $ pid=2086304 00:03:31.710 04:39:22 -- pm/common@50 -- $ kill -TERM 2086304 00:03:31.710 04:39:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.710 04:39:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:31.710 04:39:22 -- pm/common@44 -- $ pid=2086306 00:03:31.710 04:39:22 -- pm/common@50 -- $ kill -TERM 2086306 00:03:31.710 04:39:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.710 04:39:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:31.710 04:39:22 -- pm/common@44 -- $ pid=2086307 00:03:31.710 04:39:22 -- pm/common@50 -- $ kill -TERM 2086307 00:03:31.710 04:39:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.710 04:39:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:31.710 04:39:22 -- pm/common@44 -- $ pid=2086336 00:03:31.710 04:39:22 -- pm/common@50 -- $ sudo -E kill -TERM 2086336 00:03:31.710 04:39:22 -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:03:31.710 04:39:22 -- common/autotest_common.sh@1689 -- # lcov --version 00:03:31.710 04:39:22 -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:03:31.710 04:39:22 -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:03:31.710 04:39:22 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.710 04:39:22 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.710 04:39:22 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.710 04:39:22 -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.710 04:39:22 -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.710 04:39:22 -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.710 04:39:22 -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.710 04:39:22 -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.710 04:39:22 -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.710 04:39:22 -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.710 04:39:22 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.710 04:39:22 -- scripts/common.sh@344 -- # case "$op" in 00:03:31.710 04:39:22 -- scripts/common.sh@345 -- # : 1 00:03:31.710 04:39:22 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.710 04:39:22 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.710 04:39:22 -- scripts/common.sh@365 -- # decimal 1 00:03:31.710 04:39:22 -- scripts/common.sh@353 -- # local d=1 00:03:31.710 04:39:22 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.711 04:39:22 -- scripts/common.sh@355 -- # echo 1 00:03:31.711 04:39:22 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.711 04:39:22 -- scripts/common.sh@366 -- # decimal 2 00:03:31.711 04:39:22 -- scripts/common.sh@353 -- # local d=2 00:03:31.711 04:39:22 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.711 04:39:22 -- scripts/common.sh@355 -- # echo 2 00:03:31.711 04:39:22 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.711 04:39:22 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.711 04:39:22 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.711 04:39:22 -- scripts/common.sh@368 -- # return 0 00:03:31.711 04:39:22 -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.711 04:39:22 -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:03:31.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.711 --rc genhtml_branch_coverage=1 00:03:31.711 --rc genhtml_function_coverage=1 00:03:31.711 --rc genhtml_legend=1 00:03:31.711 --rc geninfo_all_blocks=1 00:03:31.711 --rc geninfo_unexecuted_blocks=1 00:03:31.711 00:03:31.711 ' 00:03:31.711 04:39:22 -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:03:31.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.711 --rc genhtml_branch_coverage=1 00:03:31.711 --rc genhtml_function_coverage=1 00:03:31.711 --rc genhtml_legend=1 00:03:31.711 --rc geninfo_all_blocks=1 00:03:31.711 --rc geninfo_unexecuted_blocks=1 00:03:31.711 00:03:31.711 ' 00:03:31.711 04:39:22 -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:03:31.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.711 --rc genhtml_branch_coverage=1 00:03:31.711 --rc genhtml_function_coverage=1 00:03:31.711 --rc genhtml_legend=1 00:03:31.711 --rc geninfo_all_blocks=1 00:03:31.711 --rc geninfo_unexecuted_blocks=1 00:03:31.711 00:03:31.711 ' 00:03:31.711 04:39:22 -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:03:31.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.711 --rc genhtml_branch_coverage=1 00:03:31.711 --rc genhtml_function_coverage=1 00:03:31.711 --rc genhtml_legend=1 00:03:31.711 --rc geninfo_all_blocks=1 00:03:31.711 --rc geninfo_unexecuted_blocks=1 00:03:31.711 00:03:31.711 ' 00:03:31.711 04:39:22 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:31.711 04:39:22 -- nvmf/common.sh@7 -- # uname -s 00:03:31.711 04:39:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:31.711 04:39:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:31.711 04:39:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:31.711 04:39:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:31.711 04:39:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:31.711 04:39:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:31.711 04:39:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:31.711 04:39:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:31.711 04:39:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:31.711 04:39:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:31.711 04:39:22 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:31.711 04:39:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:31.711 04:39:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:31.711 04:39:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:31.711 04:39:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:31.711 04:39:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:31.711 04:39:22 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:31.711 04:39:22 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:31.711 04:39:22 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:31.711 04:39:22 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:31.711 04:39:22 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:31.711 04:39:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.711 04:39:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.711 04:39:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.711 04:39:22 -- paths/export.sh@5 -- # export PATH 00:03:31.711 04:39:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.711 04:39:22 -- nvmf/common.sh@51 -- # : 0 00:03:31.711 04:39:22 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:31.711 04:39:22 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:31.711 04:39:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:31.711 04:39:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:31.711 04:39:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:31.711 04:39:22 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:31.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:31.711 04:39:22 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:31.711 04:39:22 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:31.711 04:39:22 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:31.711 04:39:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:31.711 04:39:22 -- spdk/autotest.sh@32 -- # uname -s 00:03:31.711 04:39:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:31.711 04:39:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:31.711 04:39:22 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:03:31.711 04:39:22 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:31.711 04:39:22 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:31.711 04:39:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:31.711 04:39:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:31.711 04:39:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:31.711 04:39:22 -- spdk/autotest.sh@48 -- # udevadm_pid=2162922 00:03:31.711 04:39:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:31.711 04:39:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:31.711 04:39:22 -- pm/common@17 -- # local monitor 00:03:31.711 04:39:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.711 04:39:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.711 04:39:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.711 04:39:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.711 04:39:22 -- pm/common@25 -- # sleep 1 00:03:31.711 04:39:22 -- pm/common@21 -- # date +%s 00:03:31.711 04:39:22 -- pm/common@21 -- # date +%s 00:03:31.711 04:39:22 -- pm/common@21 -- # date +%s 00:03:31.711 04:39:22 -- pm/common@21 -- # date +%s 00:03:31.711 04:39:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730086762 00:03:31.711 04:39:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730086762 00:03:31.711 04:39:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730086762 00:03:31.711 04:39:22 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730086762 00:03:31.711 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730086762_collect-cpu-load.pm.log 00:03:31.711 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730086762_collect-vmstat.pm.log 00:03:31.711 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730086762_collect-cpu-temp.pm.log 00:03:31.711 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730086762_collect-bmc-pm.bmc.pm.log 00:03:33.091 04:39:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:33.091 04:39:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:33.091 04:39:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:33.091 04:39:23 -- common/autotest_common.sh@10 -- # set +x 00:03:33.091 04:39:23 -- spdk/autotest.sh@59 -- # create_test_list 00:03:33.091 04:39:23 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:33.091 04:39:23 -- common/autotest_common.sh@10 -- # set +x 00:03:33.091 04:39:23 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:33.091 04:39:23 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:33.091 04:39:23 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:33.091 04:39:23 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:33.091 04:39:23 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:33.091 04:39:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:33.091 04:39:23 -- common/autotest_common.sh@1453 -- # uname 00:03:33.091 04:39:23 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:03:33.091 04:39:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:33.091 04:39:23 -- common/autotest_common.sh@1473 -- # uname 00:03:33.091 04:39:23 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:03:33.091 04:39:23 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:33.091 04:39:23 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:33.091 lcov: LCOV version 1.15 00:03:33.092 04:39:23 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:05.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:05.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:11.784 04:40:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:11.784 04:40:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:11.784 04:40:02 -- common/autotest_common.sh@10 -- # set +x 00:04:11.784 04:40:02 -- spdk/autotest.sh@78 -- # rm -f 00:04:11.784 04:40:02 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.716 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:12.716 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:12.716 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:12.716 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:12.716 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:12.716 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:12.716 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:12.716 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:12.716 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:12.716 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:12.974 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:12.974 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:12.974 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:12.974 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:12.974 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:12.974 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:12.974 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:12.974 04:40:03 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:12.974 04:40:03 -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:04:12.974 04:40:03 -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:04:12.974 04:40:03 -- common/autotest_common.sh@1654 -- # local nvme bdf 00:04:12.974 04:40:03 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:12.974 04:40:03 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:04:12.974 04:40:03 -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:04:12.974 04:40:03 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.974 04:40:03 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:12.974 04:40:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:12.974 04:40:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.974 04:40:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.974 04:40:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:12.974 04:40:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:12.974 04:40:03 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:12.974 No valid GPT data, bailing 00:04:12.974 04:40:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.974 04:40:03 -- scripts/common.sh@394 -- # pt= 00:04:12.974 04:40:03 -- scripts/common.sh@395 -- # return 1 00:04:12.974 04:40:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:12.974 1+0 records in 00:04:12.974 1+0 records out 00:04:12.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00218897 s, 479 MB/s 00:04:12.974 04:40:03 -- spdk/autotest.sh@105 -- # sync 00:04:12.974 04:40:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:12.974 04:40:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:12.974 04:40:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:15.505 04:40:05 -- spdk/autotest.sh@111 -- # uname -s 00:04:15.505 04:40:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:15.505 04:40:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:15.505 04:40:05 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:16.071 Hugepages 00:04:16.072 node hugesize free / total 00:04:16.072 node0 1048576kB 0 / 0 00:04:16.072 node0 2048kB 0 / 0 00:04:16.072 node1 1048576kB 0 / 0 00:04:16.072 node1 2048kB 0 / 0 00:04:16.072 00:04:16.072 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:16.072 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:16.072 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:16.072 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:16.072 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:16.330 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:16.330 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:16.330 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:16.330 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:16.330 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:16.330 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:16.330 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:16.330 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:16.330 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:16.330 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:16.330 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:16.330 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:16.330 NVMe 0000:88:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:04:16.330 04:40:06 -- spdk/autotest.sh@117 -- # uname -s 00:04:16.330 04:40:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:16.330 04:40:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:16.330 04:40:06 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:17.705 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:17.705 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:17.705 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:17.705 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:17.705 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:17.705 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:17.705 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:17.705 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:17.705 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:17.705 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:17.705 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:17.705 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:17.705 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:17.705 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:17.705 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:17.705 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:18.640 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:18.640 04:40:09 -- common/autotest_common.sh@1513 -- # sleep 1 00:04:20.018 04:40:10 -- common/autotest_common.sh@1514 -- # bdfs=() 00:04:20.018 04:40:10 -- common/autotest_common.sh@1514 -- # local bdfs 00:04:20.018 04:40:10 -- common/autotest_common.sh@1516 -- # bdfs=($(get_nvme_bdfs)) 00:04:20.018 04:40:10 -- common/autotest_common.sh@1516 -- # get_nvme_bdfs 00:04:20.018 04:40:10 -- common/autotest_common.sh@1494 -- # bdfs=() 00:04:20.018 04:40:10 -- common/autotest_common.sh@1494 -- # local bdfs 00:04:20.018 04:40:10 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.018 04:40:10 -- common/autotest_common.sh@1495 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:20.018 04:40:10 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:04:20.018 04:40:10 -- common/autotest_common.sh@1496 -- # (( 1 == 0 )) 00:04:20.018 04:40:10 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:88:00.0 00:04:20.018 04:40:10 -- common/autotest_common.sh@1518 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.953 Waiting for block devices as requested 00:04:20.953 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:20.953 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:20.953 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:21.212 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:21.212 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:21.212 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:21.472 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:21.472 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:21.472 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:21.472 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:21.472 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:21.731 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:21.731 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:21.731 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:21.731 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:21.989 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:21.989 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:21.989 04:40:12 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:21.989 04:40:12 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:21.989 04:40:12 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 00:04:21.989 04:40:12 -- common/autotest_common.sh@1483 -- # grep 0000:88:00.0/nvme/nvme 00:04:21.989 04:40:12 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:21.989 04:40:12 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:21.989 04:40:12 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:21.989 04:40:12 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme0 00:04:21.989 04:40:12 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme0 00:04:21.989 04:40:12 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme0 ]] 00:04:21.989 04:40:12 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme0 00:04:21.989 04:40:12 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:21.989 04:40:12 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:22.248 04:40:12 -- common/autotest_common.sh@1527 -- # oacs=' 0xf' 00:04:22.248 04:40:12 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:22.248 04:40:12 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:22.248 04:40:12 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme0 00:04:22.248 04:40:12 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:22.248 04:40:12 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:22.248 04:40:12 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:22.248 04:40:12 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:04:22.248 04:40:12 -- common/autotest_common.sh@1539 -- # continue 00:04:22.248 04:40:12 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:22.248 04:40:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.248 04:40:12 -- common/autotest_common.sh@10 -- # set +x 00:04:22.248 04:40:12 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:22.248 04:40:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.248 04:40:12 -- common/autotest_common.sh@10 -- # set +x 00:04:22.248 04:40:12 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:23.626 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:23.626 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:23.626 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:23.626 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:23.626 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:23.626 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:23.626 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:23.626 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:23.626 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:23.626 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:23.626 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:23.626 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:23.626 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:23.626 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:23.626 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:23.626 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:24.563 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:24.563 04:40:14 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:24.563 04:40:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.563 04:40:14 -- common/autotest_common.sh@10 -- # set +x 00:04:24.563 04:40:14 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:24.563 04:40:14 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:04:24.563 04:40:14 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.563 04:40:14 -- common/autotest_common.sh@1559 -- # bdfs=() 00:04:24.563 04:40:14 -- common/autotest_common.sh@1559 -- # _bdfs=() 00:04:24.563 04:40:14 -- common/autotest_common.sh@1559 -- # local bdfs _bdfs 00:04:24.563 04:40:14 -- common/autotest_common.sh@1560 -- # _bdfs=($(get_nvme_bdfs)) 00:04:24.563 04:40:14 -- common/autotest_common.sh@1560 -- # get_nvme_bdfs 00:04:24.563 04:40:14 -- common/autotest_common.sh@1494 -- # bdfs=() 00:04:24.563 04:40:14 -- common/autotest_common.sh@1494 -- # local bdfs 00:04:24.563 04:40:14 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.563 04:40:14 -- common/autotest_common.sh@1495 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:24.563 04:40:14 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:04:24.563 04:40:15 -- common/autotest_common.sh@1496 -- # (( 1 == 0 )) 00:04:24.563 04:40:15 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:88:00.0 00:04:24.563 04:40:15 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:24.563 04:40:15 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:24.563 04:40:15 -- common/autotest_common.sh@1562 -- # device=0x0a54 00:04:24.563 04:40:15 -- common/autotest_common.sh@1563 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:24.563 04:40:15 -- common/autotest_common.sh@1564 -- # bdfs+=($bdf) 00:04:24.563 04:40:15 -- common/autotest_common.sh@1568 -- # (( 1 > 0 )) 00:04:24.563 04:40:15 -- common/autotest_common.sh@1569 -- # printf '%s\n' 0000:88:00.0 00:04:24.563 04:40:15 -- common/autotest_common.sh@1575 -- # [[ -z 0000:88:00.0 ]] 00:04:24.563 04:40:15 -- common/autotest_common.sh@1580 -- # spdk_tgt_pid=2173700 00:04:24.563 04:40:15 -- common/autotest_common.sh@1579 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.563 04:40:15 -- common/autotest_common.sh@1581 -- # waitforlisten 2173700 00:04:24.563 04:40:15 -- common/autotest_common.sh@831 -- # '[' -z 2173700 ']' 00:04:24.563 04:40:15 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.563 04:40:15 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.564 04:40:15 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.564 04:40:15 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.564 04:40:15 -- common/autotest_common.sh@10 -- # set +x 00:04:24.564 [2024-10-28 04:40:15.121235] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:04:24.564 [2024-10-28 04:40:15.121333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173700 ] 00:04:24.822 [2024-10-28 04:40:15.254573] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:24.822 [2024-10-28 04:40:15.296141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.822 [2024-10-28 04:40:15.345484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.753 04:40:16 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:25.753 04:40:16 -- common/autotest_common.sh@864 -- # return 0 00:04:25.753 04:40:16 -- common/autotest_common.sh@1583 -- # bdf_id=0 00:04:25.753 04:40:16 -- common/autotest_common.sh@1584 -- # for bdf in "${bdfs[@]}" 00:04:25.753 04:40:16 -- common/autotest_common.sh@1585 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:29.030 nvme0n1 00:04:29.030 04:40:19 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:29.030 [2024-10-28 04:40:19.491560] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:29.030 [2024-10-28 04:40:19.491612] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:29.030 request: 00:04:29.030 { 00:04:29.030 "nvme_ctrlr_name": "nvme0", 00:04:29.030 "password": "test", 00:04:29.030 "method": "bdev_nvme_opal_revert", 00:04:29.030 "req_id": 1 00:04:29.030 } 00:04:29.030 Got JSON-RPC error response 00:04:29.030 response: 00:04:29.030 { 00:04:29.030 "code": -32603, 00:04:29.030 "message": "Internal error" 00:04:29.030 } 00:04:29.030 04:40:19 -- common/autotest_common.sh@1587 -- # true 00:04:29.030 04:40:19 -- common/autotest_common.sh@1588 -- # (( ++bdf_id )) 00:04:29.030 04:40:19 -- common/autotest_common.sh@1591 -- # killprocess 2173700 00:04:29.030 04:40:19 -- common/autotest_common.sh@950 -- # '[' -z 2173700 ']' 00:04:29.030 04:40:19 -- common/autotest_common.sh@954 -- # kill -0 2173700 00:04:29.030 04:40:19 -- common/autotest_common.sh@955 -- # uname 00:04:29.030 04:40:19 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:29.030 04:40:19 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2173700 00:04:29.030 04:40:19 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:29.030 04:40:19 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:29.030 04:40:19 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2173700' 00:04:29.030 killing process with pid 2173700 00:04:29.030 04:40:19 -- common/autotest_common.sh@969 -- # kill 2173700 00:04:29.030 04:40:19 -- common/autotest_common.sh@974 -- # wait 2173700 00:04:30.925 04:40:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:30.925 04:40:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:30.925 04:40:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:30.925 04:40:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:30.925 04:40:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:30.925 04:40:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:30.925 04:40:21 -- common/autotest_common.sh@10 -- # set +x 
00:04:30.925 04:40:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:30.925 04:40:21 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:30.925 04:40:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.925 04:40:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.925 04:40:21 -- common/autotest_common.sh@10 -- # set +x 00:04:30.925 ************************************ 00:04:30.925 START TEST env 00:04:30.925 ************************************ 00:04:30.925 04:40:21 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:30.925 * Looking for test storage... 00:04:30.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:30.925 04:40:21 env -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:30.925 04:40:21 env -- common/autotest_common.sh@1689 -- # lcov --version 00:04:30.925 04:40:21 env -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:31.184 04:40:21 env -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:31.184 04:40:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.184 04:40:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.184 04:40:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.184 04:40:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.184 04:40:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.184 04:40:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.184 04:40:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.184 04:40:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.184 04:40:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.184 04:40:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.184 04:40:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.184 04:40:21 env -- scripts/common.sh@344 -- # case "$op" in 00:04:31.184 04:40:21 env -- scripts/common.sh@345 -- # : 1 00:04:31.184 04:40:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.184 04:40:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.184 04:40:21 env -- scripts/common.sh@365 -- # decimal 1 00:04:31.184 04:40:21 env -- scripts/common.sh@353 -- # local d=1 00:04:31.184 04:40:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.184 04:40:21 env -- scripts/common.sh@355 -- # echo 1 00:04:31.184 04:40:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.184 04:40:21 env -- scripts/common.sh@366 -- # decimal 2 00:04:31.184 04:40:21 env -- scripts/common.sh@353 -- # local d=2 00:04:31.184 04:40:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.184 04:40:21 env -- scripts/common.sh@355 -- # echo 2 00:04:31.184 04:40:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.184 04:40:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.184 04:40:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.184 04:40:21 env -- scripts/common.sh@368 -- # return 0 00:04:31.185 04:40:21 env -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.185 04:40:21 env -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:31.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.185 --rc genhtml_branch_coverage=1 00:04:31.185 --rc genhtml_function_coverage=1 00:04:31.185 --rc genhtml_legend=1 00:04:31.185 --rc geninfo_all_blocks=1 00:04:31.185 --rc geninfo_unexecuted_blocks=1 00:04:31.185 00:04:31.185 ' 00:04:31.185 04:40:21 env -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:31.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.185 --rc genhtml_branch_coverage=1 00:04:31.185 --rc genhtml_function_coverage=1 00:04:31.185 --rc genhtml_legend=1 00:04:31.185 --rc geninfo_all_blocks=1 00:04:31.185 --rc geninfo_unexecuted_blocks=1 00:04:31.185 00:04:31.185 ' 00:04:31.185 04:40:21 env -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:31.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.185 --rc genhtml_branch_coverage=1 00:04:31.185 --rc genhtml_function_coverage=1 00:04:31.185 --rc genhtml_legend=1 00:04:31.185 --rc geninfo_all_blocks=1 00:04:31.185 --rc geninfo_unexecuted_blocks=1 00:04:31.185 00:04:31.185 ' 00:04:31.185 04:40:21 env -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:31.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.185 --rc genhtml_branch_coverage=1 00:04:31.185 --rc genhtml_function_coverage=1 00:04:31.185 --rc genhtml_legend=1 00:04:31.185 --rc geninfo_all_blocks=1 00:04:31.185 --rc geninfo_unexecuted_blocks=1 00:04:31.185 00:04:31.185 ' 00:04:31.185 04:40:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:31.185 04:40:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.185 04:40:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.185 04:40:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.185 ************************************ 00:04:31.185 START TEST env_memory 00:04:31.185 ************************************ 00:04:31.185 04:40:21 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:31.185 00:04:31.185 00:04:31.185 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.185 http://cunit.sourceforge.net/ 00:04:31.185 00:04:31.185 00:04:31.185 Suite: memory 00:04:31.185 Test: alloc and free memory map ...[2024-10-28 04:40:21.599809] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:31.185 passed 00:04:31.185 Test: mem map translation ...[2024-10-28 04:40:21.621472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:31.185 [2024-10-28 04:40:21.621494] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:31.185 [2024-10-28 04:40:21.621545] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:31.185 [2024-10-28 04:40:21.621557] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:31.185 passed 00:04:31.185 Test: mem map registration ...[2024-10-28 04:40:21.663861] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:31.185 [2024-10-28 04:40:21.663882] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:31.185 passed 00:04:31.185 Test: mem map adjacent registrations ...passed 00:04:31.185 00:04:31.185 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.185 suites 1 1 n/a 0 0 00:04:31.185 tests 4 4 4 0 0 00:04:31.185 asserts 152 152 152 0 n/a 00:04:31.185 00:04:31.185 Elapsed time = 0.145 seconds 00:04:31.185 00:04:31.185 real 0m0.153s 00:04:31.185 user 0m0.147s 00:04:31.185 sys 0m0.006s 00:04:31.185 04:40:21 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.185 04:40:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:31.185 ************************************ 00:04:31.185 END TEST env_memory 00:04:31.185 ************************************ 00:04:31.185 04:40:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:31.185 04:40:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.185 04:40:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.185 04:40:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.185 ************************************ 00:04:31.185 START TEST env_vtophys 00:04:31.185 ************************************ 00:04:31.185 04:40:21 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:31.185 EAL: lib.eal log level changed from notice to debug 00:04:31.185 EAL: Detected lcore 0 as core 0 on socket 0 00:04:31.185 EAL: Detected lcore 1 as core 1 on socket 0 00:04:31.185 EAL: Detected lcore 2 as core 2 on socket 0 00:04:31.185 EAL: Detected lcore 3 as core 3 on socket 0 00:04:31.185 EAL: Detected lcore 4 as core 4 on socket 0 00:04:31.185 EAL: Detected lcore 5 as core 5 on socket 0 00:04:31.185 EAL: Detected lcore 6 as core 8 on socket 0 00:04:31.185 EAL: Detected lcore 7 as core 9 on socket 0 00:04:31.185 EAL: Detected lcore 8 as core 10 on socket 0 00:04:31.185 EAL: Detected lcore 9 as core 11 on socket 0 00:04:31.185 EAL: Detected lcore 10 
as core 12 on socket 0 00:04:31.185 EAL: Detected lcore 11 as core 13 on socket 0 00:04:31.185 EAL: Detected lcore 12 as core 0 on socket 1 00:04:31.185 EAL: Detected lcore 13 as core 1 on socket 1 00:04:31.185 EAL: Detected lcore 14 as core 2 on socket 1 00:04:31.185 EAL: Detected lcore 15 as core 3 on socket 1 00:04:31.185 EAL: Detected lcore 16 as core 4 on socket 1 00:04:31.185 EAL: Detected lcore 17 as core 5 on socket 1 00:04:31.185 EAL: Detected lcore 18 as core 8 on socket 1 00:04:31.185 EAL: Detected lcore 19 as core 9 on socket 1 00:04:31.185 EAL: Detected lcore 20 as core 10 on socket 1 00:04:31.185 EAL: Detected lcore 21 as core 11 on socket 1 00:04:31.185 EAL: Detected lcore 22 as core 12 on socket 1 00:04:31.185 EAL: Detected lcore 23 as core 13 on socket 1 00:04:31.185 EAL: Detected lcore 24 as core 0 on socket 0 00:04:31.185 EAL: Detected lcore 25 as core 1 on socket 0 00:04:31.185 EAL: Detected lcore 26 as core 2 on socket 0 00:04:31.185 EAL: Detected lcore 27 as core 3 on socket 0 00:04:31.185 EAL: Detected lcore 28 as core 4 on socket 0 00:04:31.185 EAL: Detected lcore 29 as core 5 on socket 0 00:04:31.185 EAL: Detected lcore 30 as core 8 on socket 0 00:04:31.185 EAL: Detected lcore 31 as core 9 on socket 0 00:04:31.185 EAL: Detected lcore 32 as core 10 on socket 0 00:04:31.185 EAL: Detected lcore 33 as core 11 on socket 0 00:04:31.185 EAL: Detected lcore 34 as core 12 on socket 0 00:04:31.185 EAL: Detected lcore 35 as core 13 on socket 0 00:04:31.185 EAL: Detected lcore 36 as core 0 on socket 1 00:04:31.185 EAL: Detected lcore 37 as core 1 on socket 1 00:04:31.185 EAL: Detected lcore 38 as core 2 on socket 1 00:04:31.185 EAL: Detected lcore 39 as core 3 on socket 1 00:04:31.185 EAL: Detected lcore 40 as core 4 on socket 1 00:04:31.185 EAL: Detected lcore 41 as core 5 on socket 1 00:04:31.185 EAL: Detected lcore 42 as core 8 on socket 1 00:04:31.185 EAL: Detected lcore 43 as core 9 on socket 1 00:04:31.185 EAL: Detected lcore 44 as core 10 on socket 1 00:04:31.185 EAL: Detected lcore 45 as core 11 on socket 1 00:04:31.185 EAL: Detected lcore 46 as core 12 on socket 1 00:04:31.185 EAL: Detected lcore 47 as core 13 on socket 1 00:04:31.443 EAL: Maximum logical cores by configuration: 128 00:04:31.443 EAL: Detected CPU lcores: 48 00:04:31.443 EAL: Detected NUMA nodes: 2 00:04:31.443 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:04:31.443 EAL: Detected shared linkage of DPDK 00:04:31.444 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:04:31.444 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:04:31.444 EAL: Registered [vdev] bus. 
00:04:31.444 EAL: bus.vdev log level changed from disabled to notice 00:04:31.444 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:04:31.444 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:04:31.444 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:31.444 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:31.444 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:04:31.444 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:04:31.444 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:04:31.444 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:04:31.444 EAL: No shared files mode enabled, IPC will be disabled 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Bus pci wants IOVA as 'DC' 00:04:31.444 EAL: Bus vdev wants IOVA as 'DC' 00:04:31.444 EAL: Buses did not request a specific IOVA mode. 00:04:31.444 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:31.444 EAL: Selected IOVA mode 'VA' 00:04:31.444 EAL: Probing VFIO support... 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: IOMMU type 1 (Type 1) is supported 00:04:31.444 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:31.444 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:31.444 EAL: VFIO support initialized 00:04:31.444 EAL: Ask a virtual area of 0x2e000 bytes 00:04:31.444 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:31.444 EAL: Setting up physically contiguous memory... 
00:04:31.444 EAL: Setting maximum number of open files to 524288 00:04:31.444 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:31.444 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:31.444 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:31.444 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.444 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:31.444 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.444 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.444 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:31.444 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:31.444 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.444 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:31.444 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.444 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.444 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:31.444 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:31.444 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.444 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:31.444 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.444 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.444 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:31.444 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:31.444 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.444 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:31.444 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.444 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.444 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:31.444 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:31.444 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:31.444 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.444 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:31.444 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.444 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.444 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:31.444 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:31.444 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.444 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:31.444 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.444 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.444 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:31.444 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:31.444 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.444 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:31.444 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.444 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.444 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:31.444 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:31.444 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.444 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:31.444 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.444 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.444 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:31.444 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:31.444 EAL: Hugepages will be freed exactly as allocated. 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Refined arch frequency 2700000000 to measured frequency 2693513691 00:04:31.444 EAL: TSC frequency is ~2693500 KHz 00:04:31.444 EAL: Main lcore 0 is ready (tid=7f6653891a00;cpuset=[0]) 00:04:31.444 EAL: Trying to obtain current memory policy. 00:04:31.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.444 EAL: Restoring previous memory policy: 0 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Heap on socket 0 was expanded by 2MB 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Mem event callback 'spdk:(nil)' registered 00:04:31.444 00:04:31.444 00:04:31.444 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.444 http://cunit.sourceforge.net/ 00:04:31.444 00:04:31.444 00:04:31.444 Suite: components_suite 00:04:31.444 Test: vtophys_malloc_test ...passed 00:04:31.444 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:31.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.444 EAL: Restoring previous memory policy: 4 00:04:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Heap on socket 0 was expanded by 4MB 00:04:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Heap on socket 0 was shrunk by 4MB 00:04:31.444 EAL: Trying to obtain current memory policy. 00:04:31.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.444 EAL: Restoring previous memory policy: 4 00:04:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Heap on socket 0 was expanded by 6MB 00:04:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Heap on socket 0 was shrunk by 6MB 00:04:31.444 EAL: Trying to obtain current memory policy. 00:04:31.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.444 EAL: Restoring previous memory policy: 4 00:04:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Heap on socket 0 was expanded by 10MB 00:04:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Heap on socket 0 was shrunk by 10MB 00:04:31.444 EAL: Trying to obtain current memory policy. 
00:04:31.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.444 EAL: Restoring previous memory policy: 4 00:04:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Heap on socket 0 was expanded by 18MB 00:04:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Heap on socket 0 was shrunk by 18MB 00:04:31.444 EAL: Trying to obtain current memory policy. 00:04:31.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.444 EAL: Restoring previous memory policy: 4 00:04:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Heap on socket 0 was expanded by 34MB 00:04:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Heap on socket 0 was shrunk by 34MB 00:04:31.444 EAL: Trying to obtain current memory policy. 00:04:31.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.444 EAL: Restoring previous memory policy: 4 00:04:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.444 EAL: No shared files mode enabled, IPC is disabled 00:04:31.444 EAL: Heap on socket 0 was expanded by 66MB 00:04:31.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.444 EAL: request: mp_malloc_sync 00:04:31.445 EAL: No shared files mode enabled, IPC is disabled 00:04:31.445 EAL: Heap on socket 0 was shrunk by 66MB 00:04:31.445 EAL: Trying to obtain current memory policy. 00:04:31.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.702 EAL: Restoring previous memory policy: 4 00:04:31.702 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.702 EAL: request: mp_malloc_sync 00:04:31.702 EAL: No shared files mode enabled, IPC is disabled 00:04:31.702 EAL: Heap on socket 0 was expanded by 130MB 00:04:31.702 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.702 EAL: request: mp_malloc_sync 00:04:31.702 EAL: No shared files mode enabled, IPC is disabled 00:04:31.703 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.703 EAL: Trying to obtain current memory policy. 00:04:31.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.703 EAL: Restoring previous memory policy: 4 00:04:31.703 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.703 EAL: request: mp_malloc_sync 00:04:31.703 EAL: No shared files mode enabled, IPC is disabled 00:04:31.703 EAL: Heap on socket 0 was expanded by 258MB 00:04:31.703 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.703 EAL: request: mp_malloc_sync 00:04:31.703 EAL: No shared files mode enabled, IPC is disabled 00:04:31.703 EAL: Heap on socket 0 was shrunk by 258MB 00:04:31.703 EAL: Trying to obtain current memory policy. 
00:04:31.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.961 EAL: Restoring previous memory policy: 4 00:04:31.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.961 EAL: request: mp_malloc_sync 00:04:31.961 EAL: No shared files mode enabled, IPC is disabled 00:04:31.961 EAL: Heap on socket 0 was expanded by 514MB 00:04:31.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.218 EAL: request: mp_malloc_sync 00:04:32.218 EAL: No shared files mode enabled, IPC is disabled 00:04:32.218 EAL: Heap on socket 0 was shrunk by 514MB 00:04:32.218 EAL: Trying to obtain current memory policy. 00:04:32.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.477 EAL: Restoring previous memory policy: 4 00:04:32.477 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.477 EAL: request: mp_malloc_sync 00:04:32.477 EAL: No shared files mode enabled, IPC is disabled 00:04:32.477 EAL: Heap on socket 0 was expanded by 1026MB 00:04:32.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.994 EAL: request: mp_malloc_sync 00:04:32.994 EAL: No shared files mode enabled, IPC is disabled 00:04:32.994 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:32.994 passed 00:04:32.994 00:04:32.994 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.994 suites 1 1 n/a 0 0 00:04:32.994 tests 2 2 2 0 0 00:04:32.994 asserts 497 497 497 0 n/a 00:04:32.994 00:04:32.994 Elapsed time = 1.398 seconds 00:04:32.994 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.994 EAL: request: mp_malloc_sync 00:04:32.994 EAL: No shared files mode enabled, IPC is disabled 00:04:32.994 EAL: Heap on socket 0 was shrunk by 2MB 00:04:32.994 EAL: No shared files mode enabled, IPC is disabled 00:04:32.994 EAL: No shared files mode enabled, IPC is disabled 00:04:32.994 EAL: No shared files mode enabled, IPC is disabled 00:04:32.994 00:04:32.994 real 0m1.630s 00:04:32.994 user 0m0.884s 00:04:32.994 sys 0m0.605s 00:04:32.994 04:40:23 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.994 04:40:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:32.994 ************************************ 00:04:32.994 END TEST env_vtophys 00:04:32.994 ************************************ 00:04:32.994 04:40:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:32.994 04:40:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.994 04:40:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.994 04:40:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.994 ************************************ 00:04:32.994 START TEST env_pci 00:04:32.994 ************************************ 00:04:32.994 04:40:23 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:32.994 00:04:32.994 00:04:32.994 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.994 http://cunit.sourceforge.net/ 00:04:32.994 00:04:32.994 00:04:32.994 Suite: pci 00:04:32.994 Test: pci_hook ...[2024-10-28 04:40:23.449724] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2174714 has claimed it 00:04:32.994 EAL: Cannot find device (10000:00:01.0) 00:04:32.994 EAL: Failed to attach device on primary process 00:04:32.994 passed 00:04:32.994 00:04:32.994 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:32.994 suites 1 1 n/a 0 0 00:04:32.994 tests 1 1 1 0 0 00:04:32.994 asserts 25 25 25 0 n/a 00:04:32.994 00:04:32.994 Elapsed time = 0.019 seconds 00:04:32.994 00:04:32.994 real 0m0.031s 00:04:32.994 user 0m0.010s 00:04:32.994 sys 0m0.021s 00:04:32.994 04:40:23 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.994 04:40:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:32.994 ************************************ 00:04:32.994 END TEST env_pci 00:04:32.994 ************************************ 00:04:32.994 04:40:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:32.994 04:40:23 env -- env/env.sh@15 -- # uname 00:04:32.994 04:40:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:32.994 04:40:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:32.994 04:40:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.994 04:40:23 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:32.994 04:40:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.994 04:40:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.994 ************************************ 00:04:32.994 START TEST env_dpdk_post_init 00:04:32.994 ************************************ 00:04:32.994 04:40:23 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.994 EAL: Detected CPU lcores: 48 00:04:32.994 EAL: Detected NUMA nodes: 2 00:04:32.994 EAL: Detected shared linkage of DPDK 00:04:32.994 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.994 EAL: Selected IOVA mode 'VA' 00:04:32.994 EAL: VFIO support initialized 00:04:33.253 EAL: Using IOMMU type 1 (Type 1) 00:04:38.522 Starting DPDK initialization... 00:04:38.522 Starting SPDK post initialization... 00:04:38.522 SPDK NVMe probe 00:04:38.522 Attaching to 0000:88:00.0 00:04:38.522 Attached to 0000:88:00.0 00:04:38.522 Cleaning up... 
00:04:38.522 00:04:38.522 real 0m4.532s 00:04:38.522 user 0m3.048s 00:04:38.522 sys 0m0.444s 00:04:38.522 04:40:28 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.522 04:40:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.522 ************************************ 00:04:38.522 END TEST env_dpdk_post_init 00:04:38.522 ************************************ 00:04:38.522 04:40:28 env -- env/env.sh@26 -- # uname 00:04:38.522 04:40:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.522 04:40:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.522 04:40:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.522 04:40:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.522 04:40:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.522 ************************************ 00:04:38.522 START TEST env_mem_callbacks 00:04:38.522 ************************************ 00:04:38.522 04:40:28 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.522 EAL: Detected CPU lcores: 48 00:04:38.522 EAL: Detected NUMA nodes: 2 00:04:38.522 EAL: Detected shared linkage of DPDK 00:04:38.522 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.522 EAL: Selected IOVA mode 'VA' 00:04:38.522 EAL: VFIO support initialized 00:04:38.522 00:04:38.522 00:04:38.522 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.522 http://cunit.sourceforge.net/ 00:04:38.522 00:04:38.522 00:04:38.522 Suite: memory 00:04:38.522 Test: test ... 00:04:38.522 register 0x200000200000 2097152 00:04:38.522 malloc 3145728 00:04:38.522 register 0x200000400000 4194304 00:04:38.522 buf 0x200000500000 len 3145728 PASSED 00:04:38.522 malloc 64 00:04:38.522 buf 0x2000004fff40 len 64 PASSED 00:04:38.522 malloc 4194304 00:04:38.522 register 0x200000800000 6291456 00:04:38.522 buf 0x200000a00000 len 4194304 PASSED 00:04:38.522 free 0x200000500000 3145728 00:04:38.522 free 0x2000004fff40 64 00:04:38.522 unregister 0x200000400000 4194304 PASSED 00:04:38.522 free 0x200000a00000 4194304 00:04:38.522 unregister 0x200000800000 6291456 PASSED 00:04:38.522 malloc 8388608 00:04:38.522 register 0x200000400000 10485760 00:04:38.522 buf 0x200000600000 len 8388608 PASSED 00:04:38.522 free 0x200000600000 8388608 00:04:38.522 unregister 0x200000400000 10485760 PASSED 00:04:38.522 passed 00:04:38.522 00:04:38.522 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.522 suites 1 1 n/a 0 0 00:04:38.522 tests 1 1 1 0 0 00:04:38.522 asserts 15 15 15 0 n/a 00:04:38.522 00:04:38.522 Elapsed time = 0.005 seconds 00:04:38.522 00:04:38.522 real 0m0.147s 00:04:38.522 user 0m0.011s 00:04:38.522 sys 0m0.036s 00:04:38.522 04:40:28 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.522 04:40:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:38.522 ************************************ 00:04:38.522 END TEST env_mem_callbacks 00:04:38.522 ************************************ 00:04:38.522 00:04:38.522 real 0m6.887s 00:04:38.522 user 0m4.287s 00:04:38.522 sys 0m1.337s 00:04:38.522 04:40:28 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.522 04:40:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.522 ************************************ 00:04:38.522 END TEST env 
00:04:38.522 ************************************ 00:04:38.522 04:40:28 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.522 04:40:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.522 04:40:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.522 04:40:28 -- common/autotest_common.sh@10 -- # set +x 00:04:38.522 ************************************ 00:04:38.522 START TEST rpc 00:04:38.522 ************************************ 00:04:38.522 04:40:28 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.522 * Looking for test storage... 00:04:38.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:38.522 04:40:28 rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:38.522 04:40:28 rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:04:38.522 04:40:28 rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:38.522 04:40:28 rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:38.522 04:40:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.522 04:40:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.522 04:40:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.522 04:40:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.522 04:40:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.522 04:40:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.522 04:40:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.522 04:40:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.522 04:40:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.522 04:40:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.522 04:40:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.522 04:40:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.522 04:40:28 rpc -- scripts/common.sh@345 -- # : 1 00:04:38.522 04:40:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.522 04:40:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.522 04:40:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.522 04:40:28 rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.522 04:40:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.522 04:40:28 rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.522 04:40:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.522 04:40:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.522 04:40:28 rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.522 04:40:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.522 04:40:28 rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.522 04:40:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.522 04:40:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.522 04:40:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.522 04:40:28 rpc -- scripts/common.sh@368 -- # return 0 00:04:38.522 04:40:28 rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.522 04:40:28 rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:38.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.522 --rc genhtml_branch_coverage=1 00:04:38.522 --rc genhtml_function_coverage=1 00:04:38.522 --rc genhtml_legend=1 00:04:38.522 --rc geninfo_all_blocks=1 00:04:38.522 --rc geninfo_unexecuted_blocks=1 00:04:38.522 00:04:38.522 ' 00:04:38.522 04:40:28 rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:38.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.522 --rc genhtml_branch_coverage=1 00:04:38.522 --rc genhtml_function_coverage=1 00:04:38.522 --rc genhtml_legend=1 00:04:38.523 --rc geninfo_all_blocks=1 00:04:38.523 --rc geninfo_unexecuted_blocks=1 00:04:38.523 00:04:38.523 ' 00:04:38.523 04:40:28 rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:38.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.523 --rc genhtml_branch_coverage=1 00:04:38.523 --rc genhtml_function_coverage=1 00:04:38.523 --rc genhtml_legend=1 00:04:38.523 --rc geninfo_all_blocks=1 00:04:38.523 --rc geninfo_unexecuted_blocks=1 00:04:38.523 00:04:38.523 ' 00:04:38.523 04:40:28 rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:38.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.523 --rc genhtml_branch_coverage=1 00:04:38.523 --rc genhtml_function_coverage=1 00:04:38.523 --rc genhtml_legend=1 00:04:38.523 --rc geninfo_all_blocks=1 00:04:38.523 --rc geninfo_unexecuted_blocks=1 00:04:38.523 00:04:38.523 ' 00:04:38.523 04:40:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2175481 00:04:38.523 04:40:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:38.523 04:40:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.523 04:40:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2175481 00:04:38.523 04:40:28 rpc -- common/autotest_common.sh@831 -- # '[' -z 2175481 ']' 00:04:38.523 04:40:28 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.523 04:40:28 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.523 04:40:28 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:38.523 04:40:28 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.523 04:40:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.523 [2024-10-28 04:40:28.520591] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:04:38.523 [2024-10-28 04:40:28.520698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175481 ] 00:04:38.523 [2024-10-28 04:40:28.652322] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:38.523 [2024-10-28 04:40:28.687303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.523 [2024-10-28 04:40:28.732830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:38.523 [2024-10-28 04:40:28.732885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2175481' to capture a snapshot of events at runtime. 00:04:38.523 [2024-10-28 04:40:28.732899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:38.523 [2024-10-28 04:40:28.732926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:38.523 [2024-10-28 04:40:28.732936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2175481 for offline analysis/debug. 00:04:38.523 [2024-10-28 04:40:28.733502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.090 04:40:29 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.090 04:40:29 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:39.090 04:40:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.090 04:40:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.090 04:40:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:39.090 04:40:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:39.090 04:40:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.090 04:40:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.090 04:40:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.090 ************************************ 00:04:39.090 START TEST rpc_integrity 00:04:39.090 ************************************ 00:04:39.090 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:39.090 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.090 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.090 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.090 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.090 04:40:29 
rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.090 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.090 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.090 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.090 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.090 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.090 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.090 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:39.090 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.090 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.090 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.090 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.090 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.090 { 00:04:39.090 "name": "Malloc0", 00:04:39.090 "aliases": [ 00:04:39.090 "575e022f-d686-4b8b-83f6-255a8ae077d6" 00:04:39.090 ], 00:04:39.090 "product_name": "Malloc disk", 00:04:39.090 "block_size": 512, 00:04:39.090 "num_blocks": 16384, 00:04:39.090 "uuid": "575e022f-d686-4b8b-83f6-255a8ae077d6", 00:04:39.090 "assigned_rate_limits": { 00:04:39.090 "rw_ios_per_sec": 0, 00:04:39.090 "rw_mbytes_per_sec": 0, 00:04:39.090 "r_mbytes_per_sec": 0, 00:04:39.090 "w_mbytes_per_sec": 0 00:04:39.090 }, 00:04:39.090 "claimed": false, 00:04:39.090 "zoned": false, 00:04:39.090 "supported_io_types": { 00:04:39.090 "read": true, 00:04:39.090 "write": true, 00:04:39.090 "unmap": true, 00:04:39.090 "flush": true, 00:04:39.090 "reset": true, 00:04:39.090 "nvme_admin": false, 00:04:39.090 "nvme_io": false, 00:04:39.090 "nvme_io_md": false, 00:04:39.090 "write_zeroes": true, 00:04:39.090 "zcopy": true, 00:04:39.090 "get_zone_info": false, 00:04:39.090 "zone_management": false, 00:04:39.090 "zone_append": false, 00:04:39.090 "compare": false, 00:04:39.090 "compare_and_write": false, 00:04:39.090 "abort": true, 00:04:39.090 "seek_hole": false, 00:04:39.090 "seek_data": false, 00:04:39.090 "copy": true, 00:04:39.091 "nvme_iov_md": false 00:04:39.091 }, 00:04:39.091 "memory_domains": [ 00:04:39.091 { 00:04:39.091 "dma_device_id": "system", 00:04:39.091 "dma_device_type": 1 00:04:39.091 }, 00:04:39.091 { 00:04:39.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.091 "dma_device_type": 2 00:04:39.091 } 00:04:39.091 ], 00:04:39.091 "driver_specific": {} 00:04:39.091 } 00:04:39.091 ]' 00:04:39.091 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.091 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.091 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:39.091 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.091 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.091 [2024-10-28 04:40:29.641223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:39.091 [2024-10-28 04:40:29.641267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.091 [2024-10-28 04:40:29.641292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f0ea30 00:04:39.091 [2024-10-28 04:40:29.641307] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.091 [2024-10-28 04:40:29.642851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.091 [2024-10-28 04:40:29.642878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.091 Passthru0 00:04:39.091 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.091 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.091 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.091 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.091 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.091 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.091 { 00:04:39.091 "name": "Malloc0", 00:04:39.091 "aliases": [ 00:04:39.091 "575e022f-d686-4b8b-83f6-255a8ae077d6" 00:04:39.091 ], 00:04:39.091 "product_name": "Malloc disk", 00:04:39.091 "block_size": 512, 00:04:39.091 "num_blocks": 16384, 00:04:39.091 "uuid": "575e022f-d686-4b8b-83f6-255a8ae077d6", 00:04:39.091 "assigned_rate_limits": { 00:04:39.091 "rw_ios_per_sec": 0, 00:04:39.091 "rw_mbytes_per_sec": 0, 00:04:39.091 "r_mbytes_per_sec": 0, 00:04:39.091 "w_mbytes_per_sec": 0 00:04:39.091 }, 00:04:39.091 "claimed": true, 00:04:39.091 "claim_type": "exclusive_write", 00:04:39.091 "zoned": false, 00:04:39.091 "supported_io_types": { 00:04:39.091 "read": true, 00:04:39.091 "write": true, 00:04:39.091 "unmap": true, 00:04:39.091 "flush": true, 00:04:39.091 "reset": true, 00:04:39.091 "nvme_admin": false, 00:04:39.091 "nvme_io": false, 00:04:39.091 "nvme_io_md": false, 00:04:39.091 "write_zeroes": true, 00:04:39.091 "zcopy": true, 00:04:39.091 "get_zone_info": false, 00:04:39.091 "zone_management": false, 00:04:39.091 "zone_append": false, 00:04:39.091 "compare": false, 00:04:39.091 "compare_and_write": false, 00:04:39.091 "abort": true, 00:04:39.091 "seek_hole": false, 00:04:39.091 "seek_data": false, 00:04:39.091 "copy": true, 00:04:39.091 "nvme_iov_md": false 00:04:39.091 }, 00:04:39.091 "memory_domains": [ 00:04:39.091 { 00:04:39.091 "dma_device_id": "system", 00:04:39.091 "dma_device_type": 1 00:04:39.091 }, 00:04:39.091 { 00:04:39.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.091 "dma_device_type": 2 00:04:39.091 } 00:04:39.091 ], 00:04:39.091 "driver_specific": {} 00:04:39.091 }, 00:04:39.091 { 00:04:39.091 "name": "Passthru0", 00:04:39.091 "aliases": [ 00:04:39.091 "efbea46a-3ce2-59bb-b3b3-1011995fee09" 00:04:39.091 ], 00:04:39.091 "product_name": "passthru", 00:04:39.091 "block_size": 512, 00:04:39.091 "num_blocks": 16384, 00:04:39.091 "uuid": "efbea46a-3ce2-59bb-b3b3-1011995fee09", 00:04:39.091 "assigned_rate_limits": { 00:04:39.091 "rw_ios_per_sec": 0, 00:04:39.091 "rw_mbytes_per_sec": 0, 00:04:39.091 "r_mbytes_per_sec": 0, 00:04:39.091 "w_mbytes_per_sec": 0 00:04:39.091 }, 00:04:39.091 "claimed": false, 00:04:39.091 "zoned": false, 00:04:39.091 "supported_io_types": { 00:04:39.091 "read": true, 00:04:39.091 "write": true, 00:04:39.091 "unmap": true, 00:04:39.091 "flush": true, 00:04:39.091 "reset": true, 00:04:39.091 "nvme_admin": false, 00:04:39.091 "nvme_io": false, 00:04:39.091 "nvme_io_md": false, 00:04:39.091 "write_zeroes": true, 00:04:39.091 "zcopy": true, 00:04:39.091 "get_zone_info": false, 00:04:39.091 "zone_management": false, 00:04:39.091 "zone_append": false, 00:04:39.091 "compare": false, 00:04:39.091 "compare_and_write": 
false, 00:04:39.091 "abort": true, 00:04:39.091 "seek_hole": false, 00:04:39.091 "seek_data": false, 00:04:39.091 "copy": true, 00:04:39.091 "nvme_iov_md": false 00:04:39.091 }, 00:04:39.091 "memory_domains": [ 00:04:39.091 { 00:04:39.091 "dma_device_id": "system", 00:04:39.091 "dma_device_type": 1 00:04:39.091 }, 00:04:39.091 { 00:04:39.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.091 "dma_device_type": 2 00:04:39.091 } 00:04:39.091 ], 00:04:39.091 "driver_specific": { 00:04:39.091 "passthru": { 00:04:39.091 "name": "Passthru0", 00:04:39.091 "base_bdev_name": "Malloc0" 00:04:39.091 } 00:04:39.091 } 00:04:39.091 } 00:04:39.091 ]' 00:04:39.091 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.349 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.349 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.349 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.349 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.349 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.349 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:39.349 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.349 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.349 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.349 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.349 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.349 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.349 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.349 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.349 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.349 04:40:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.349 00:04:39.349 real 0m0.220s 00:04:39.349 user 0m0.147s 00:04:39.349 sys 0m0.020s 00:04:39.349 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.349 04:40:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.349 ************************************ 00:04:39.349 END TEST rpc_integrity 00:04:39.349 ************************************ 00:04:39.349 04:40:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:39.349 04:40:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.349 04:40:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.349 04:40:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.349 ************************************ 00:04:39.349 START TEST rpc_plugins 00:04:39.349 ************************************ 00:04:39.349 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:39.349 04:40:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:39.349 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.349 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.349 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.349 04:40:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:39.349 04:40:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:39.349 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.349 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.349 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.349 04:40:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:39.349 { 00:04:39.349 "name": "Malloc1", 00:04:39.349 "aliases": [ 00:04:39.349 "748a5194-8500-41f9-b802-24f6aada19ff" 00:04:39.349 ], 00:04:39.349 "product_name": "Malloc disk", 00:04:39.349 "block_size": 4096, 00:04:39.349 "num_blocks": 256, 00:04:39.349 "uuid": "748a5194-8500-41f9-b802-24f6aada19ff", 00:04:39.349 "assigned_rate_limits": { 00:04:39.349 "rw_ios_per_sec": 0, 00:04:39.349 "rw_mbytes_per_sec": 0, 00:04:39.349 "r_mbytes_per_sec": 0, 00:04:39.349 "w_mbytes_per_sec": 0 00:04:39.349 }, 00:04:39.349 "claimed": false, 00:04:39.349 "zoned": false, 00:04:39.349 "supported_io_types": { 00:04:39.349 "read": true, 00:04:39.349 "write": true, 00:04:39.349 "unmap": true, 00:04:39.349 "flush": true, 00:04:39.349 "reset": true, 00:04:39.349 "nvme_admin": false, 00:04:39.349 "nvme_io": false, 00:04:39.349 "nvme_io_md": false, 00:04:39.349 "write_zeroes": true, 00:04:39.349 "zcopy": true, 00:04:39.349 "get_zone_info": false, 00:04:39.349 "zone_management": false, 00:04:39.349 "zone_append": false, 00:04:39.349 "compare": false, 00:04:39.349 "compare_and_write": false, 00:04:39.349 "abort": true, 00:04:39.349 "seek_hole": false, 00:04:39.349 "seek_data": false, 00:04:39.349 "copy": true, 00:04:39.349 "nvme_iov_md": false 00:04:39.349 }, 00:04:39.349 "memory_domains": [ 00:04:39.349 { 00:04:39.349 "dma_device_id": "system", 00:04:39.349 "dma_device_type": 1 00:04:39.349 }, 00:04:39.349 { 00:04:39.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.349 "dma_device_type": 2 00:04:39.349 } 00:04:39.349 ], 00:04:39.349 "driver_specific": {} 00:04:39.349 } 00:04:39.349 ]' 00:04:39.349 04:40:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:39.349 04:40:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:39.349 04:40:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:39.349 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.349 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.349 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.349 04:40:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:39.350 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.350 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.350 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.350 04:40:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:39.350 04:40:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:39.350 04:40:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:39.350 00:04:39.350 real 0m0.113s 00:04:39.350 user 0m0.075s 00:04:39.350 sys 0m0.008s 00:04:39.350 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.350 04:40:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.350 ************************************ 00:04:39.350 END TEST rpc_plugins 00:04:39.350 ************************************ 00:04:39.350 04:40:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:39.350 04:40:29 rpc 
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.350 04:40:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.350 04:40:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.607 ************************************ 00:04:39.607 START TEST rpc_trace_cmd_test 00:04:39.607 ************************************ 00:04:39.607 04:40:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:39.607 04:40:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:39.607 04:40:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:39.607 04:40:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.607 04:40:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.607 04:40:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.607 04:40:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:39.607 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2175481", 00:04:39.607 "tpoint_group_mask": "0x8", 00:04:39.607 "iscsi_conn": { 00:04:39.607 "mask": "0x2", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "scsi": { 00:04:39.607 "mask": "0x4", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "bdev": { 00:04:39.607 "mask": "0x8", 00:04:39.607 "tpoint_mask": "0xffffffffffffffff" 00:04:39.607 }, 00:04:39.607 "nvmf_rdma": { 00:04:39.607 "mask": "0x10", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "nvmf_tcp": { 00:04:39.607 "mask": "0x20", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "ftl": { 00:04:39.607 "mask": "0x40", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "blobfs": { 00:04:39.607 "mask": "0x80", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "dsa": { 00:04:39.607 "mask": "0x200", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "thread": { 00:04:39.607 "mask": "0x400", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "nvme_pcie": { 00:04:39.607 "mask": "0x800", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "iaa": { 00:04:39.607 "mask": "0x1000", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "nvme_tcp": { 00:04:39.607 "mask": "0x2000", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "bdev_nvme": { 00:04:39.607 "mask": "0x4000", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "sock": { 00:04:39.607 "mask": "0x8000", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "blob": { 00:04:39.607 "mask": "0x10000", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "bdev_raid": { 00:04:39.607 "mask": "0x20000", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 }, 00:04:39.607 "scheduler": { 00:04:39.607 "mask": "0x40000", 00:04:39.607 "tpoint_mask": "0x0" 00:04:39.607 } 00:04:39.607 }' 00:04:39.607 04:40:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:39.607 04:40:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:39.607 04:40:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:39.607 04:40:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:39.607 04:40:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:39.607 04:40:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:39.607 04:40:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:39.607 04:40:30 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:39.607 04:40:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:39.607 04:40:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:39.607 00:04:39.607 real 0m0.201s 00:04:39.607 user 0m0.177s 00:04:39.607 sys 0m0.016s 00:04:39.607 04:40:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.607 04:40:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.607 ************************************ 00:04:39.607 END TEST rpc_trace_cmd_test 00:04:39.607 ************************************ 00:04:39.607 04:40:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:39.607 04:40:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:39.607 04:40:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:39.607 04:40:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.607 04:40:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.607 04:40:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.939 ************************************ 00:04:39.939 START TEST rpc_daemon_integrity 00:04:39.939 ************************************ 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.939 { 00:04:39.939 "name": "Malloc2", 00:04:39.939 "aliases": [ 00:04:39.939 "4a1ea796-d109-4afa-93b0-d5a5cb826b36" 00:04:39.939 ], 00:04:39.939 "product_name": "Malloc disk", 00:04:39.939 "block_size": 512, 00:04:39.939 "num_blocks": 16384, 00:04:39.939 "uuid": "4a1ea796-d109-4afa-93b0-d5a5cb826b36", 00:04:39.939 "assigned_rate_limits": { 00:04:39.939 "rw_ios_per_sec": 0, 00:04:39.939 "rw_mbytes_per_sec": 0, 00:04:39.939 "r_mbytes_per_sec": 0, 00:04:39.939 "w_mbytes_per_sec": 0 00:04:39.939 }, 00:04:39.939 "claimed": false, 00:04:39.939 "zoned": false, 00:04:39.939 "supported_io_types": { 00:04:39.939 
"read": true, 00:04:39.939 "write": true, 00:04:39.939 "unmap": true, 00:04:39.939 "flush": true, 00:04:39.939 "reset": true, 00:04:39.939 "nvme_admin": false, 00:04:39.939 "nvme_io": false, 00:04:39.939 "nvme_io_md": false, 00:04:39.939 "write_zeroes": true, 00:04:39.939 "zcopy": true, 00:04:39.939 "get_zone_info": false, 00:04:39.939 "zone_management": false, 00:04:39.939 "zone_append": false, 00:04:39.939 "compare": false, 00:04:39.939 "compare_and_write": false, 00:04:39.939 "abort": true, 00:04:39.939 "seek_hole": false, 00:04:39.939 "seek_data": false, 00:04:39.939 "copy": true, 00:04:39.939 "nvme_iov_md": false 00:04:39.939 }, 00:04:39.939 "memory_domains": [ 00:04:39.939 { 00:04:39.939 "dma_device_id": "system", 00:04:39.939 "dma_device_type": 1 00:04:39.939 }, 00:04:39.939 { 00:04:39.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.939 "dma_device_type": 2 00:04:39.939 } 00:04:39.939 ], 00:04:39.939 "driver_specific": {} 00:04:39.939 } 00:04:39.939 ]' 00:04:39.939 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.940 [2024-10-28 04:40:30.322486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:39.940 [2024-10-28 04:40:30.322530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.940 [2024-10-28 04:40:30.322554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f11a80 00:04:39.940 [2024-10-28 04:40:30.322570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.940 [2024-10-28 04:40:30.324145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.940 [2024-10-28 04:40:30.324175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.940 Passthru0 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.940 { 00:04:39.940 "name": "Malloc2", 00:04:39.940 "aliases": [ 00:04:39.940 "4a1ea796-d109-4afa-93b0-d5a5cb826b36" 00:04:39.940 ], 00:04:39.940 "product_name": "Malloc disk", 00:04:39.940 "block_size": 512, 00:04:39.940 "num_blocks": 16384, 00:04:39.940 "uuid": "4a1ea796-d109-4afa-93b0-d5a5cb826b36", 00:04:39.940 "assigned_rate_limits": { 00:04:39.940 "rw_ios_per_sec": 0, 00:04:39.940 "rw_mbytes_per_sec": 0, 00:04:39.940 "r_mbytes_per_sec": 0, 00:04:39.940 "w_mbytes_per_sec": 0 00:04:39.940 }, 00:04:39.940 "claimed": true, 00:04:39.940 "claim_type": "exclusive_write", 00:04:39.940 "zoned": false, 00:04:39.940 "supported_io_types": { 00:04:39.940 "read": true, 00:04:39.940 "write": true, 00:04:39.940 "unmap": true, 00:04:39.940 "flush": true, 00:04:39.940 "reset": true, 
00:04:39.940 "nvme_admin": false, 00:04:39.940 "nvme_io": false, 00:04:39.940 "nvme_io_md": false, 00:04:39.940 "write_zeroes": true, 00:04:39.940 "zcopy": true, 00:04:39.940 "get_zone_info": false, 00:04:39.940 "zone_management": false, 00:04:39.940 "zone_append": false, 00:04:39.940 "compare": false, 00:04:39.940 "compare_and_write": false, 00:04:39.940 "abort": true, 00:04:39.940 "seek_hole": false, 00:04:39.940 "seek_data": false, 00:04:39.940 "copy": true, 00:04:39.940 "nvme_iov_md": false 00:04:39.940 }, 00:04:39.940 "memory_domains": [ 00:04:39.940 { 00:04:39.940 "dma_device_id": "system", 00:04:39.940 "dma_device_type": 1 00:04:39.940 }, 00:04:39.940 { 00:04:39.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.940 "dma_device_type": 2 00:04:39.940 } 00:04:39.940 ], 00:04:39.940 "driver_specific": {} 00:04:39.940 }, 00:04:39.940 { 00:04:39.940 "name": "Passthru0", 00:04:39.940 "aliases": [ 00:04:39.940 "7d2f5cd4-ac7e-54e2-a465-80ccae9164de" 00:04:39.940 ], 00:04:39.940 "product_name": "passthru", 00:04:39.940 "block_size": 512, 00:04:39.940 "num_blocks": 16384, 00:04:39.940 "uuid": "7d2f5cd4-ac7e-54e2-a465-80ccae9164de", 00:04:39.940 "assigned_rate_limits": { 00:04:39.940 "rw_ios_per_sec": 0, 00:04:39.940 "rw_mbytes_per_sec": 0, 00:04:39.940 "r_mbytes_per_sec": 0, 00:04:39.940 "w_mbytes_per_sec": 0 00:04:39.940 }, 00:04:39.940 "claimed": false, 00:04:39.940 "zoned": false, 00:04:39.940 "supported_io_types": { 00:04:39.940 "read": true, 00:04:39.940 "write": true, 00:04:39.940 "unmap": true, 00:04:39.940 "flush": true, 00:04:39.940 "reset": true, 00:04:39.940 "nvme_admin": false, 00:04:39.940 "nvme_io": false, 00:04:39.940 "nvme_io_md": false, 00:04:39.940 "write_zeroes": true, 00:04:39.940 "zcopy": true, 00:04:39.940 "get_zone_info": false, 00:04:39.940 "zone_management": false, 00:04:39.940 "zone_append": false, 00:04:39.940 "compare": false, 00:04:39.940 "compare_and_write": false, 00:04:39.940 "abort": true, 00:04:39.940 "seek_hole": false, 00:04:39.940 "seek_data": false, 00:04:39.940 "copy": true, 00:04:39.940 "nvme_iov_md": false 00:04:39.940 }, 00:04:39.940 "memory_domains": [ 00:04:39.940 { 00:04:39.940 "dma_device_id": "system", 00:04:39.940 "dma_device_type": 1 00:04:39.940 }, 00:04:39.940 { 00:04:39.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.940 "dma_device_type": 2 00:04:39.940 } 00:04:39.940 ], 00:04:39.940 "driver_specific": { 00:04:39.940 "passthru": { 00:04:39.940 "name": "Passthru0", 00:04:39.940 "base_bdev_name": "Malloc2" 00:04:39.940 } 00:04:39.940 } 00:04:39.940 } 00:04:39.940 ]' 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.940 04:40:30 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.940 00:04:39.940 real 0m0.234s 00:04:39.940 user 0m0.160s 00:04:39.940 sys 0m0.017s 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.940 04:40:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.940 ************************************ 00:04:39.940 END TEST rpc_daemon_integrity 00:04:39.940 ************************************ 00:04:39.940 04:40:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:39.940 04:40:30 rpc -- rpc/rpc.sh@84 -- # killprocess 2175481 00:04:39.940 04:40:30 rpc -- common/autotest_common.sh@950 -- # '[' -z 2175481 ']' 00:04:39.940 04:40:30 rpc -- common/autotest_common.sh@954 -- # kill -0 2175481 00:04:39.940 04:40:30 rpc -- common/autotest_common.sh@955 -- # uname 00:04:39.940 04:40:30 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.940 04:40:30 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2175481 00:04:40.239 04:40:30 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.239 04:40:30 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.239 04:40:30 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2175481' 00:04:40.239 killing process with pid 2175481 00:04:40.239 04:40:30 rpc -- common/autotest_common.sh@969 -- # kill 2175481 00:04:40.239 04:40:30 rpc -- common/autotest_common.sh@974 -- # wait 2175481 00:04:40.498 00:04:40.498 real 0m2.572s 00:04:40.498 user 0m3.201s 00:04:40.498 sys 0m0.642s 00:04:40.498 04:40:30 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.498 04:40:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.498 ************************************ 00:04:40.498 END TEST rpc 00:04:40.498 ************************************ 00:04:40.498 04:40:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:40.498 04:40:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.498 04:40:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.498 04:40:30 -- common/autotest_common.sh@10 -- # set +x 00:04:40.498 ************************************ 00:04:40.498 START TEST skip_rpc 00:04:40.498 ************************************ 00:04:40.498 04:40:30 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:40.498 * Looking for test storage... 
00:04:40.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:40.498 04:40:30 skip_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:40.498 04:40:30 skip_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:04:40.498 04:40:30 skip_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:40.498 04:40:31 skip_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.498 04:40:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.498 04:40:31 skip_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.498 04:40:31 skip_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:40.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.498 --rc genhtml_branch_coverage=1 00:04:40.498 --rc genhtml_function_coverage=1 00:04:40.498 --rc genhtml_legend=1 00:04:40.498 --rc geninfo_all_blocks=1 00:04:40.498 --rc geninfo_unexecuted_blocks=1 00:04:40.498 00:04:40.498 ' 00:04:40.498 04:40:31 skip_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:40.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.498 --rc genhtml_branch_coverage=1 00:04:40.498 --rc genhtml_function_coverage=1 00:04:40.498 --rc genhtml_legend=1 00:04:40.498 --rc geninfo_all_blocks=1 00:04:40.498 --rc geninfo_unexecuted_blocks=1 00:04:40.498 00:04:40.498 ' 00:04:40.498 04:40:31 skip_rpc -- common/autotest_common.sh@1703 -- # export 
'LCOV=lcov 00:04:40.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.498 --rc genhtml_branch_coverage=1 00:04:40.498 --rc genhtml_function_coverage=1 00:04:40.498 --rc genhtml_legend=1 00:04:40.498 --rc geninfo_all_blocks=1 00:04:40.498 --rc geninfo_unexecuted_blocks=1 00:04:40.498 00:04:40.498 ' 00:04:40.498 04:40:31 skip_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:40.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.498 --rc genhtml_branch_coverage=1 00:04:40.498 --rc genhtml_function_coverage=1 00:04:40.498 --rc genhtml_legend=1 00:04:40.498 --rc geninfo_all_blocks=1 00:04:40.498 --rc geninfo_unexecuted_blocks=1 00:04:40.498 00:04:40.498 ' 00:04:40.498 04:40:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.498 04:40:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.498 04:40:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:40.498 04:40:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.498 04:40:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.498 04:40:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.757 ************************************ 00:04:40.757 START TEST skip_rpc 00:04:40.757 ************************************ 00:04:40.757 04:40:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:40.757 04:40:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2175932 00:04:40.757 04:40:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:40.757 04:40:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.757 04:40:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:40.757 [2024-10-28 04:40:31.161388] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:04:40.757 [2024-10-28 04:40:31.161467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175932 ] 00:04:40.757 [2024-10-28 04:40:31.291932] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
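Stepping back to the rpc_integrity and rpc_daemon_integrity passes recorded earlier: each one drives a malloc/passthru bdev pair through its full life cycle and checks the bdev count with jq at every step. A compressed sketch of that sequence, calling scripts/rpc.py directly instead of the harness's rpc_cmd wrapper (that substitution, and the inline expectations, are mine):
malloc=$(scripts/rpc.py bdev_malloc_create 8 512)       # 8 MiB, 512-byte blocks; prints the bdev name
scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
scripts/rpc.py bdev_get_bdevs | jq length               # expect 2: the malloc and its passthru claim
scripts/rpc.py bdev_passthru_delete Passthru0
scripts/rpc.py bdev_malloc_delete "$malloc"
scripts/rpc.py bdev_get_bdevs | jq length               # expect 0 again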
00:04:40.757 [2024-10-28 04:40:31.334080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.016 [2024-10-28 04:40:31.388087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2175932 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2175932 ']' 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2175932 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2175932 00:04:46.276 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.277 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.277 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2175932' 00:04:46.277 killing process with pid 2175932 00:04:46.277 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2175932 00:04:46.277 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2175932 00:04:46.277 00:04:46.277 real 0m5.434s 00:04:46.277 user 0m4.996s 00:04:46.277 sys 0m0.354s 00:04:46.277 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.277 04:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.277 ************************************ 00:04:46.277 END TEST skip_rpc 00:04:46.277 ************************************ 00:04:46.277 04:40:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.277 04:40:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.277 04:40:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 
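The skip_rpc pass that just finished asserts the inverse of the earlier tests: started with --no-rpc-server, the target must come up without ever answering an RPC. A minimal way to express that expectation outside the NOT/killprocess helpers (the probe, timeout, and error handling below are my own sketch):
build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5
if scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; then
    echo "unexpected: RPC answered although --no-rpc-server was given" >&2
    kill -9 $spdk_pid; exit 1
fi
kill -9 $spdk_pid; wait $spdk_pid 2>/dev/null || true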
00:04:46.277 04:40:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.277 ************************************ 00:04:46.277 START TEST skip_rpc_with_json 00:04:46.277 ************************************ 00:04:46.277 04:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:46.277 04:40:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.277 04:40:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2176610 00:04:46.277 04:40:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.277 04:40:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.277 04:40:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2176610 00:04:46.277 04:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2176610 ']' 00:04:46.277 04:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.277 04:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.277 04:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.277 04:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.277 04:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.277 [2024-10-28 04:40:36.646259] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:04:46.277 [2024-10-28 04:40:36.646366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176610 ] 00:04:46.277 [2024-10-28 04:40:36.778314] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:46.277 [2024-10-28 04:40:36.820261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.277 [2024-10-28 04:40:36.867250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.210 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.210 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:47.210 04:40:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.210 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.210 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.210 [2024-10-28 04:40:37.626993] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.210 request: 00:04:47.210 { 00:04:47.210 "trtype": "tcp", 00:04:47.210 "method": "nvmf_get_transports", 00:04:47.210 "req_id": 1 00:04:47.210 } 00:04:47.210 Got JSON-RPC error response 00:04:47.211 response: 00:04:47.211 { 00:04:47.211 "code": -19, 00:04:47.211 "message": "No such device" 00:04:47.211 } 00:04:47.211 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:47.211 04:40:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.211 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.211 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.211 [2024-10-28 04:40:37.635104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.211 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.211 04:40:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.211 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.211 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.211 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.211 04:40:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:47.211 { 00:04:47.211 "subsystems": [ 00:04:47.211 { 00:04:47.211 "subsystem": "fsdev", 00:04:47.211 "config": [ 00:04:47.211 { 00:04:47.211 "method": "fsdev_set_opts", 00:04:47.211 "params": { 00:04:47.211 "fsdev_io_pool_size": 65535, 00:04:47.211 "fsdev_io_cache_size": 256 00:04:47.211 } 00:04:47.211 } 00:04:47.211 ] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "vfio_user_target", 00:04:47.211 "config": null 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "keyring", 00:04:47.211 "config": [] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "iobuf", 00:04:47.211 "config": [ 00:04:47.211 { 00:04:47.211 "method": "iobuf_set_options", 00:04:47.211 "params": { 00:04:47.211 "small_pool_count": 8192, 00:04:47.211 "large_pool_count": 1024, 00:04:47.211 "small_bufsize": 8192, 00:04:47.211 "large_bufsize": 135168, 00:04:47.211 "enable_numa": false 00:04:47.211 } 00:04:47.211 } 00:04:47.211 ] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "sock", 00:04:47.211 "config": [ 00:04:47.211 { 00:04:47.211 "method": "sock_set_default_impl", 00:04:47.211 "params": { 00:04:47.211 "impl_name": "posix" 00:04:47.211 } 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "method": 
"sock_impl_set_options", 00:04:47.211 "params": { 00:04:47.211 "impl_name": "ssl", 00:04:47.211 "recv_buf_size": 4096, 00:04:47.211 "send_buf_size": 4096, 00:04:47.211 "enable_recv_pipe": true, 00:04:47.211 "enable_quickack": false, 00:04:47.211 "enable_placement_id": 0, 00:04:47.211 "enable_zerocopy_send_server": true, 00:04:47.211 "enable_zerocopy_send_client": false, 00:04:47.211 "zerocopy_threshold": 0, 00:04:47.211 "tls_version": 0, 00:04:47.211 "enable_ktls": false 00:04:47.211 } 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "method": "sock_impl_set_options", 00:04:47.211 "params": { 00:04:47.211 "impl_name": "posix", 00:04:47.211 "recv_buf_size": 2097152, 00:04:47.211 "send_buf_size": 2097152, 00:04:47.211 "enable_recv_pipe": true, 00:04:47.211 "enable_quickack": false, 00:04:47.211 "enable_placement_id": 0, 00:04:47.211 "enable_zerocopy_send_server": true, 00:04:47.211 "enable_zerocopy_send_client": false, 00:04:47.211 "zerocopy_threshold": 0, 00:04:47.211 "tls_version": 0, 00:04:47.211 "enable_ktls": false 00:04:47.211 } 00:04:47.211 } 00:04:47.211 ] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "vmd", 00:04:47.211 "config": [] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "accel", 00:04:47.211 "config": [ 00:04:47.211 { 00:04:47.211 "method": "accel_set_options", 00:04:47.211 "params": { 00:04:47.211 "small_cache_size": 128, 00:04:47.211 "large_cache_size": 16, 00:04:47.211 "task_count": 2048, 00:04:47.211 "sequence_count": 2048, 00:04:47.211 "buf_count": 2048 00:04:47.211 } 00:04:47.211 } 00:04:47.211 ] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "bdev", 00:04:47.211 "config": [ 00:04:47.211 { 00:04:47.211 "method": "bdev_set_options", 00:04:47.211 "params": { 00:04:47.211 "bdev_io_pool_size": 65535, 00:04:47.211 "bdev_io_cache_size": 256, 00:04:47.211 "bdev_auto_examine": true, 00:04:47.211 "iobuf_small_cache_size": 128, 00:04:47.211 "iobuf_large_cache_size": 16 00:04:47.211 } 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "method": "bdev_raid_set_options", 00:04:47.211 "params": { 00:04:47.211 "process_window_size_kb": 1024, 00:04:47.211 "process_max_bandwidth_mb_sec": 0 00:04:47.211 } 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "method": "bdev_iscsi_set_options", 00:04:47.211 "params": { 00:04:47.211 "timeout_sec": 30 00:04:47.211 } 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "method": "bdev_nvme_set_options", 00:04:47.211 "params": { 00:04:47.211 "action_on_timeout": "none", 00:04:47.211 "timeout_us": 0, 00:04:47.211 "timeout_admin_us": 0, 00:04:47.211 "keep_alive_timeout_ms": 10000, 00:04:47.211 "arbitration_burst": 0, 00:04:47.211 "low_priority_weight": 0, 00:04:47.211 "medium_priority_weight": 0, 00:04:47.211 "high_priority_weight": 0, 00:04:47.211 "nvme_adminq_poll_period_us": 10000, 00:04:47.211 "nvme_ioq_poll_period_us": 0, 00:04:47.211 "io_queue_requests": 0, 00:04:47.211 "delay_cmd_submit": true, 00:04:47.211 "transport_retry_count": 4, 00:04:47.211 "bdev_retry_count": 3, 00:04:47.211 "transport_ack_timeout": 0, 00:04:47.211 "ctrlr_loss_timeout_sec": 0, 00:04:47.211 "reconnect_delay_sec": 0, 00:04:47.211 "fast_io_fail_timeout_sec": 0, 00:04:47.211 "disable_auto_failback": false, 00:04:47.211 "generate_uuids": false, 00:04:47.211 "transport_tos": 0, 00:04:47.211 "nvme_error_stat": false, 00:04:47.211 "rdma_srq_size": 0, 00:04:47.211 "io_path_stat": false, 00:04:47.211 "allow_accel_sequence": false, 00:04:47.211 "rdma_max_cq_size": 0, 00:04:47.211 "rdma_cm_event_timeout_ms": 0, 00:04:47.211 "dhchap_digests": [ 00:04:47.211 "sha256", 
00:04:47.211 "sha384", 00:04:47.211 "sha512" 00:04:47.211 ], 00:04:47.211 "dhchap_dhgroups": [ 00:04:47.211 "null", 00:04:47.211 "ffdhe2048", 00:04:47.211 "ffdhe3072", 00:04:47.211 "ffdhe4096", 00:04:47.211 "ffdhe6144", 00:04:47.211 "ffdhe8192" 00:04:47.211 ] 00:04:47.211 } 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "method": "bdev_nvme_set_hotplug", 00:04:47.211 "params": { 00:04:47.211 "period_us": 100000, 00:04:47.211 "enable": false 00:04:47.211 } 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "method": "bdev_wait_for_examine" 00:04:47.211 } 00:04:47.211 ] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "scsi", 00:04:47.211 "config": null 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "scheduler", 00:04:47.211 "config": [ 00:04:47.211 { 00:04:47.211 "method": "framework_set_scheduler", 00:04:47.211 "params": { 00:04:47.211 "name": "static" 00:04:47.211 } 00:04:47.211 } 00:04:47.211 ] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "vhost_scsi", 00:04:47.211 "config": [] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "vhost_blk", 00:04:47.211 "config": [] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "ublk", 00:04:47.211 "config": [] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "nbd", 00:04:47.211 "config": [] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "nvmf", 00:04:47.211 "config": [ 00:04:47.211 { 00:04:47.211 "method": "nvmf_set_config", 00:04:47.211 "params": { 00:04:47.211 "discovery_filter": "match_any", 00:04:47.211 "admin_cmd_passthru": { 00:04:47.211 "identify_ctrlr": false 00:04:47.211 }, 00:04:47.211 "dhchap_digests": [ 00:04:47.211 "sha256", 00:04:47.211 "sha384", 00:04:47.211 "sha512" 00:04:47.211 ], 00:04:47.211 "dhchap_dhgroups": [ 00:04:47.211 "null", 00:04:47.211 "ffdhe2048", 00:04:47.211 "ffdhe3072", 00:04:47.211 "ffdhe4096", 00:04:47.211 "ffdhe6144", 00:04:47.211 "ffdhe8192" 00:04:47.211 ] 00:04:47.211 } 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "method": "nvmf_set_max_subsystems", 00:04:47.211 "params": { 00:04:47.211 "max_subsystems": 1024 00:04:47.211 } 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "method": "nvmf_set_crdt", 00:04:47.211 "params": { 00:04:47.211 "crdt1": 0, 00:04:47.211 "crdt2": 0, 00:04:47.211 "crdt3": 0 00:04:47.211 } 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "method": "nvmf_create_transport", 00:04:47.211 "params": { 00:04:47.211 "trtype": "TCP", 00:04:47.211 "max_queue_depth": 128, 00:04:47.211 "max_io_qpairs_per_ctrlr": 127, 00:04:47.211 "in_capsule_data_size": 4096, 00:04:47.211 "max_io_size": 131072, 00:04:47.211 "io_unit_size": 131072, 00:04:47.211 "max_aq_depth": 128, 00:04:47.211 "num_shared_buffers": 511, 00:04:47.211 "buf_cache_size": 4294967295, 00:04:47.211 "dif_insert_or_strip": false, 00:04:47.211 "zcopy": false, 00:04:47.211 "c2h_success": true, 00:04:47.211 "sock_priority": 0, 00:04:47.211 "abort_timeout_sec": 1, 00:04:47.211 "ack_timeout": 0, 00:04:47.211 "data_wr_pool_size": 0 00:04:47.211 } 00:04:47.211 } 00:04:47.211 ] 00:04:47.211 }, 00:04:47.211 { 00:04:47.211 "subsystem": "iscsi", 00:04:47.212 "config": [ 00:04:47.212 { 00:04:47.212 "method": "iscsi_set_options", 00:04:47.212 "params": { 00:04:47.212 "node_base": "iqn.2016-06.io.spdk", 00:04:47.212 "max_sessions": 128, 00:04:47.212 "max_connections_per_session": 2, 00:04:47.212 "max_queue_depth": 64, 00:04:47.212 "default_time2wait": 2, 00:04:47.212 "default_time2retain": 20, 00:04:47.212 "first_burst_length": 8192, 00:04:47.212 "immediate_data": true, 00:04:47.212 "allow_duplicated_isid": false, 
00:04:47.212 "error_recovery_level": 0, 00:04:47.212 "nop_timeout": 60, 00:04:47.212 "nop_in_interval": 30, 00:04:47.212 "disable_chap": false, 00:04:47.212 "require_chap": false, 00:04:47.212 "mutual_chap": false, 00:04:47.212 "chap_group": 0, 00:04:47.212 "max_large_datain_per_connection": 64, 00:04:47.212 "max_r2t_per_connection": 4, 00:04:47.212 "pdu_pool_size": 36864, 00:04:47.212 "immediate_data_pool_size": 16384, 00:04:47.212 "data_out_pool_size": 2048 00:04:47.212 } 00:04:47.212 } 00:04:47.212 ] 00:04:47.212 } 00:04:47.212 ] 00:04:47.212 } 00:04:47.212 04:40:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:47.212 04:40:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2176610 00:04:47.212 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2176610 ']' 00:04:47.212 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2176610 00:04:47.212 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:47.212 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.212 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2176610 00:04:47.469 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.469 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.469 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2176610' 00:04:47.469 killing process with pid 2176610 00:04:47.469 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2176610 00:04:47.469 04:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2176610 00:04:47.727 04:40:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2176751 00:04:47.727 04:40:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:47.727 04:40:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:52.986 04:40:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2176751 00:04:52.986 04:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2176751 ']' 00:04:52.986 04:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2176751 00:04:52.986 04:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:52.986 04:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.986 04:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2176751 00:04:52.986 04:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.986 04:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.986 04:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2176751' 00:04:52.986 killing process with pid 2176751 00:04:52.986 04:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2176751 00:04:52.986 04:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
2176751 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.245 00:04:53.245 real 0m7.114s 00:04:53.245 user 0m6.728s 00:04:53.245 sys 0m0.761s 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.245 ************************************ 00:04:53.245 END TEST skip_rpc_with_json 00:04:53.245 ************************************ 00:04:53.245 04:40:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.245 04:40:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.245 04:40:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.245 04:40:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.245 ************************************ 00:04:53.245 START TEST skip_rpc_with_delay 00:04:53.245 ************************************ 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.245 [2024-10-28 04:40:43.817237] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
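The *ERROR* from app.c above is the expected outcome of skip_rpc_with_delay: the test asks spdk_tgt for --wait-for-rpc while also disabling the RPC server, and it only passes if start-up is rejected. A minimal sketch of that check, using the spdk_tgt path logged above (the real test wraps the call in the NOT helper from autotest_common.sh, which is not reproduced here):

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  # --no-rpc-server and --wait-for-rpc are mutually exclusive; success here would be a bug.
  if "$spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "FAIL: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
      exit 1
  fi
  echo "expected start-up failure observed"
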
00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.245 00:04:53.245 real 0m0.074s 00:04:53.245 user 0m0.041s 00:04:53.245 sys 0m0.032s 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.245 04:40:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:53.245 ************************************ 00:04:53.245 END TEST skip_rpc_with_delay 00:04:53.245 ************************************ 00:04:53.504 04:40:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:53.504 04:40:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:53.504 04:40:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:53.504 04:40:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.504 04:40:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.504 04:40:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.504 ************************************ 00:04:53.504 START TEST exit_on_failed_rpc_init 00:04:53.504 ************************************ 00:04:53.504 04:40:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:53.504 04:40:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2177454 00:04:53.504 04:40:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.504 04:40:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2177454 00:04:53.504 04:40:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2177454 ']' 00:04:53.504 04:40:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.504 04:40:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.504 04:40:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.504 04:40:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.504 04:40:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.504 [2024-10-28 04:40:43.942538] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:04:53.504 [2024-10-28 04:40:43.942650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177454 ] 00:04:53.504 [2024-10-28 04:40:44.075235] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
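waitforlisten above blocks until the freshly started target (pid 2177454) answers on its RPC socket. Its body is not shown in this log; the sketch below is only a rough stand-in, assuming rpc_get_methods as the liveness probe and the default /var/tmp/spdk.sock path:

  # Rough stand-in for waitforlisten; the real helper lives in autotest_common.sh
  # and is not reproduced in this log.
  waitforlisten_sketch() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1            # target died during start-up
          "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1                                              # never came up
  }
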
00:04:53.762 [2024-10-28 04:40:44.118108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.762 [2024-10-28 04:40:44.168243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:54.697 04:40:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.697 [2024-10-28 04:40:44.991342] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:04:54.697 [2024-10-28 04:40:44.991433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177587 ] 00:04:54.697 [2024-10-28 04:40:45.123411] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:54.697 [2024-10-28 04:40:45.165907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.697 [2024-10-28 04:40:45.216134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.697 [2024-10-28 04:40:45.216244] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
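The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is deliberate: exit_on_failed_rpc_init starts a second spdk_tgt without -r, so it collides with the first instance on the default socket and must exit non-zero. In outline (the second command is expected to fail; the commented alternative with -r, as the json_config tests below use, is what would avoid the collision):

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &                  # first instance owns /var/tmp/spdk.sock
  first_pid=$!
  waitforlisten "$first_pid"            # helper as sketched earlier
  "$spdk_tgt" -m 0x2                    # no -r: same socket -> rpc.c refuses, app stops
  # e.g. "$spdk_tgt" -m 0x2 -r /var/tmp/spdk2.sock would start cleanly instead
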
00:04:54.697 [2024-10-28 04:40:45.216267] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:54.697 [2024-10-28 04:40:45.216281] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2177454 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2177454 ']' 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2177454 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.697 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2177454 00:04:54.955 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.955 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.955 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2177454' 00:04:54.955 killing process with pid 2177454 00:04:54.955 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2177454 00:04:54.955 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2177454 00:04:55.214 00:04:55.214 real 0m1.834s 00:04:55.214 user 0m2.039s 00:04:55.214 sys 0m0.492s 00:04:55.214 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.214 04:40:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.214 ************************************ 00:04:55.214 END TEST exit_on_failed_rpc_init 00:04:55.214 ************************************ 00:04:55.214 04:40:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:55.214 00:04:55.214 real 0m14.810s 00:04:55.214 user 0m13.983s 00:04:55.214 sys 0m1.834s 00:04:55.214 04:40:45 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.214 04:40:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.214 ************************************ 00:04:55.214 END TEST skip_rpc 00:04:55.214 ************************************ 00:04:55.214 04:40:45 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:55.214 04:40:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.214 04:40:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.214 04:40:45 -- 
common/autotest_common.sh@10 -- # set +x 00:04:55.214 ************************************ 00:04:55.214 START TEST rpc_client 00:04:55.214 ************************************ 00:04:55.214 04:40:45 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:55.472 * Looking for test storage... 00:04:55.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:55.472 04:40:45 rpc_client -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:55.472 04:40:45 rpc_client -- common/autotest_common.sh@1689 -- # lcov --version 00:04:55.472 04:40:45 rpc_client -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:55.472 04:40:45 rpc_client -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.472 04:40:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:55.472 04:40:45 rpc_client -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.472 04:40:45 rpc_client -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:55.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.472 --rc genhtml_branch_coverage=1 00:04:55.472 --rc genhtml_function_coverage=1 00:04:55.472 --rc genhtml_legend=1 00:04:55.472 --rc geninfo_all_blocks=1 00:04:55.472 --rc geninfo_unexecuted_blocks=1 00:04:55.472 00:04:55.472 ' 00:04:55.472 04:40:45 rpc_client -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:55.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.472 --rc genhtml_branch_coverage=1 00:04:55.472 --rc genhtml_function_coverage=1 00:04:55.472 --rc genhtml_legend=1 00:04:55.472 --rc geninfo_all_blocks=1 00:04:55.472 --rc geninfo_unexecuted_blocks=1 00:04:55.472 00:04:55.472 ' 00:04:55.472 04:40:45 rpc_client -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:55.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.472 --rc genhtml_branch_coverage=1 00:04:55.472 --rc genhtml_function_coverage=1 00:04:55.472 --rc genhtml_legend=1 00:04:55.473 --rc geninfo_all_blocks=1 00:04:55.473 --rc geninfo_unexecuted_blocks=1 00:04:55.473 00:04:55.473 ' 00:04:55.473 04:40:45 rpc_client -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:55.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.473 --rc genhtml_branch_coverage=1 00:04:55.473 --rc genhtml_function_coverage=1 00:04:55.473 --rc genhtml_legend=1 00:04:55.473 --rc geninfo_all_blocks=1 00:04:55.473 --rc geninfo_unexecuted_blocks=1 00:04:55.473 00:04:55.473 ' 00:04:55.473 04:40:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:55.473 OK 00:04:55.473 04:40:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:55.473 00:04:55.473 real 0m0.162s 00:04:55.473 user 0m0.095s 00:04:55.473 sys 0m0.076s 00:04:55.473 04:40:45 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.473 04:40:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:55.473 ************************************ 00:04:55.473 END TEST rpc_client 00:04:55.473 ************************************ 00:04:55.473 04:40:45 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
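The scripts/common.sh trace above (repeated at the start of every test file) only decides whether the installed lcov is older than version 2, so the matching LCOV_OPTS block can be exported. A simplified field-by-field sketch of that comparison follows; the real helper also splits on '-' and ':' and validates that each field is numeric:

  # Simplified stand-in for the lt/cmp_versions pair traced above.
  version_lt() {
      local -a a b
      IFS=. read -ra a <<< "$1"
      IFS=. read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1                          # equal is not "less than"
  }
  lcov_ver=$(lcov --version | awk '{print $NF}')
  version_lt "$lcov_ver" 2 && echo "using legacy --rc lcov_* option names"
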
00:04:55.473 04:40:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.473 04:40:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.473 04:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.473 ************************************ 00:04:55.473 START TEST json_config 00:04:55.473 ************************************ 00:04:55.473 04:40:46 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:55.473 04:40:46 json_config -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:55.473 04:40:46 json_config -- common/autotest_common.sh@1689 -- # lcov --version 00:04:55.473 04:40:46 json_config -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:55.732 04:40:46 json_config -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:55.732 04:40:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.732 04:40:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.732 04:40:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.732 04:40:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.732 04:40:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.732 04:40:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.732 04:40:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.732 04:40:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.732 04:40:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.732 04:40:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.732 04:40:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.732 04:40:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:55.732 04:40:46 json_config -- scripts/common.sh@345 -- # : 1 00:04:55.732 04:40:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.732 04:40:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.732 04:40:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:55.732 04:40:46 json_config -- scripts/common.sh@353 -- # local d=1 00:04:55.732 04:40:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.732 04:40:46 json_config -- scripts/common.sh@355 -- # echo 1 00:04:55.732 04:40:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.732 04:40:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:55.732 04:40:46 json_config -- scripts/common.sh@353 -- # local d=2 00:04:55.732 04:40:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.732 04:40:46 json_config -- scripts/common.sh@355 -- # echo 2 00:04:55.732 04:40:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.732 04:40:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.732 04:40:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.732 04:40:46 json_config -- scripts/common.sh@368 -- # return 0 00:04:55.732 04:40:46 json_config -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.732 04:40:46 json_config -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:55.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.732 --rc genhtml_branch_coverage=1 00:04:55.732 --rc genhtml_function_coverage=1 00:04:55.732 --rc genhtml_legend=1 00:04:55.732 --rc geninfo_all_blocks=1 00:04:55.732 --rc geninfo_unexecuted_blocks=1 00:04:55.732 00:04:55.732 ' 00:04:55.732 04:40:46 json_config -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:55.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.732 --rc genhtml_branch_coverage=1 00:04:55.732 --rc genhtml_function_coverage=1 00:04:55.732 --rc genhtml_legend=1 00:04:55.732 --rc geninfo_all_blocks=1 00:04:55.732 --rc geninfo_unexecuted_blocks=1 00:04:55.732 00:04:55.732 ' 00:04:55.732 04:40:46 json_config -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:55.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.732 --rc genhtml_branch_coverage=1 00:04:55.732 --rc genhtml_function_coverage=1 00:04:55.732 --rc genhtml_legend=1 00:04:55.732 --rc geninfo_all_blocks=1 00:04:55.732 --rc geninfo_unexecuted_blocks=1 00:04:55.732 00:04:55.732 ' 00:04:55.732 04:40:46 json_config -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:55.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.732 --rc genhtml_branch_coverage=1 00:04:55.732 --rc genhtml_function_coverage=1 00:04:55.732 --rc genhtml_legend=1 00:04:55.732 --rc geninfo_all_blocks=1 00:04:55.732 --rc geninfo_unexecuted_blocks=1 00:04:55.732 00:04:55.732 ' 00:04:55.732 04:40:46 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
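Sourcing nvmf/common.sh above mostly pins down addressing defaults (TCP port 4420, the 192.168.100 prefix, a generated host NQN) that later nvmf tests reuse. For context, this is roughly how those variables end up on an initiator-side connect line; it is a sketch only, and the loopback address is the value this particular run uses rather than something taken from this part of the log:

  # Sketch only: how the sourced defaults are typically consumed later.
  NVMF_FIRST_TARGET_IP=127.0.0.1         # matches the listener created further down
  $NVME_CONNECT -t tcp \
      -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
      -n "$NVME_SUBNQN" "${NVME_HOST[@]}"   # --hostnqn/--hostid pair from common.sh
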
00:04:55.732 04:40:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.732 04:40:46 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.732 04:40:46 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.732 04:40:46 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.732 04:40:46 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.732 04:40:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.732 04:40:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.732 04:40:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.732 04:40:46 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.732 04:40:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@51 -- # : 0 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:55.732 04:40:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.732 04:40:46 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.732 04:40:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:55.732 04:40:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:55.732 04:40:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:55.732 04:40:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:55.732 04:40:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.732 04:40:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:55.733 INFO: JSON configuration test init 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:55.733 04:40:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.733 04:40:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:55.733 04:40:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.733 04:40:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.733 04:40:46 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:55.733 04:40:46 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:55.733 04:40:46 json_config -- json_config/common.sh@10 -- # shift 00:04:55.733 04:40:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.733 04:40:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.733 04:40:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.733 04:40:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.733 04:40:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.733 04:40:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2177847 00:04:55.733 04:40:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:55.733 04:40:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.733 Waiting for target to run... 00:04:55.733 04:40:46 json_config -- json_config/common.sh@25 -- # waitforlisten 2177847 /var/tmp/spdk_tgt.sock 00:04:55.733 04:40:46 json_config -- common/autotest_common.sh@831 -- # '[' -z 2177847 ']' 00:04:55.733 04:40:46 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.733 04:40:46 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.733 04:40:46 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.733 04:40:46 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.733 04:40:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.733 [2024-10-28 04:40:46.215479] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:04:55.733 [2024-10-28 04:40:46.215564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177847 ] 00:04:56.301 [2024-10-28 04:40:46.641671] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
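The launch above is the core of json_config_test_start_app: the target gets its own RPC socket (-r /var/tmp/spdk_tgt.sock), a 1024 MB memory limit (-s 1024) and --wait-for-rpc so nothing is initialized before the test loads a configuration. (The earlier "[: : integer expression expected" line from nvmf/common.sh line 33 is just an unset variable being compared with -eq; it does not affect the run.) In outline:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/spdk_tgt.sock
  "$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
  app_pid=$!                        # logged above as app_pid["target"]=2177847
  waitforlisten "$app_pid" "$sock"  # returns once the socket answers RPCs
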
00:04:56.301 [2024-10-28 04:40:46.685298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.301 [2024-10-28 04:40:46.719726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.868 04:40:47 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.868 04:40:47 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:56.868 04:40:47 json_config -- json_config/common.sh@26 -- # echo '' 00:04:56.868 00:04:56.868 04:40:47 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:56.868 04:40:47 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:56.868 04:40:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.868 04:40:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.868 04:40:47 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:56.868 04:40:47 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:56.868 04:40:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.868 04:40:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.868 04:40:47 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:56.868 04:40:47 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:56.868 04:40:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:00.152 04:40:50 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:00.152 04:40:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:00.152 04:40:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.152 04:40:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.152 04:40:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:00.152 04:40:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:00.152 04:40:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:00.152 04:40:50 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:00.152 04:40:50 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:00.152 04:40:50 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:00.152 04:40:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:00.152 04:40:50 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@54 -- # sort 
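tgt_check_notification_types above is a symmetric-difference check: the types the test expects to be enabled and the types reported by notify_get_types are merged, and sort | uniq -u must cancel everything out. Condensed:

  # Condensed form of the check traced above.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  expected="bdev_register bdev_unregister fsdev_register fsdev_unregister"
  got=$("$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]')
  type_diff=$(echo $expected $got | tr ' ' '\n' | sort | uniq -u)
  [[ -z "$type_diff" ]] || { echo "unexpected notification types: $type_diff" >&2; exit 1; }
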
00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:00.410 04:40:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.410 04:40:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:00.410 04:40:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.410 04:40:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:00.410 04:40:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:00.410 04:40:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:00.668 MallocForNvmf0 00:05:00.668 04:40:51 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:00.668 04:40:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:00.926 MallocForNvmf1 00:05:00.926 04:40:51 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:00.926 04:40:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.184 [2024-10-28 04:40:51.594286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.184 04:40:51 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.184 04:40:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.443 04:40:51 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.443 04:40:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.701 
04:40:52 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.701 04:40:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.959 04:40:52 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:01.959 04:40:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:02.217 [2024-10-28 04:40:52.675193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:02.217 04:40:52 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:02.217 04:40:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.217 04:40:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.217 04:40:52 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:02.217 04:40:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.217 04:40:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.217 04:40:52 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:02.217 04:40:52 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.217 04:40:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.475 MallocBdevForConfigChangeCheck 00:05:02.475 04:40:53 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:02.475 04:40:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.475 04:40:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.475 04:40:53 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:02.475 04:40:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.041 04:40:53 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:03.041 INFO: shutting down applications... 
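Everything between "TCP Transport Init" and the save_config call above is create_nvmf_subsystem_config driving the target over its RPC socket. Collected in one place (commands as logged; only the redirect into spdk_tgt_config.json is paraphrased rather than shown verbatim in this log):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $rpc save_config > "$rootdir/spdk_tgt_config.json"   # feeds the --json relaunch below
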
00:05:03.041 04:40:53 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:03.041 04:40:53 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:03.041 04:40:53 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:03.041 04:40:53 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:04.941 Calling clear_iscsi_subsystem 00:05:04.941 Calling clear_nvmf_subsystem 00:05:04.941 Calling clear_nbd_subsystem 00:05:04.941 Calling clear_ublk_subsystem 00:05:04.941 Calling clear_vhost_blk_subsystem 00:05:04.941 Calling clear_vhost_scsi_subsystem 00:05:04.941 Calling clear_bdev_subsystem 00:05:04.941 04:40:55 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:04.941 04:40:55 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:04.941 04:40:55 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:04.941 04:40:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.941 04:40:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:04.941 04:40:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:04.941 04:40:55 json_config -- json_config/json_config.sh@352 -- # break 00:05:04.941 04:40:55 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:04.941 04:40:55 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:04.941 04:40:55 json_config -- json_config/common.sh@31 -- # local app=target 00:05:04.941 04:40:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.941 04:40:55 json_config -- json_config/common.sh@35 -- # [[ -n 2177847 ]] 00:05:04.941 04:40:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2177847 00:05:04.941 04:40:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.941 04:40:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.941 04:40:55 json_config -- json_config/common.sh@41 -- # kill -0 2177847 00:05:04.941 04:40:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.508 04:40:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.508 04:40:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.508 04:40:56 json_config -- json_config/common.sh@41 -- # kill -0 2177847 00:05:05.508 04:40:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:05.508 04:40:56 json_config -- json_config/common.sh@43 -- # break 00:05:05.508 04:40:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:05.508 04:40:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:05.508 SPDK target shutdown done 00:05:05.508 04:40:56 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:05.508 INFO: relaunching applications... 
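The shutdown traced above follows a clear-then-kill pattern: clear_config.py empties every subsystem through the still-running RPC server, then the target receives SIGINT and the loop polls kill -0 for up to 30 half-second intervals. In outline:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$rootdir/test/json_config/clear_config.py" -s /var/tmp/spdk_tgt.sock clear_config
  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || break   # reactor exited -> clean shutdown
      sleep 0.5
  done
  ! kill -0 "$app_pid" 2>/dev/null && echo "SPDK target shutdown done"
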
00:05:05.508 04:40:56 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.508 04:40:56 json_config -- json_config/common.sh@9 -- # local app=target 00:05:05.508 04:40:56 json_config -- json_config/common.sh@10 -- # shift 00:05:05.508 04:40:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.508 04:40:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.508 04:40:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.508 04:40:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.508 04:40:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.508 04:40:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2179143 00:05:05.508 04:40:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.508 04:40:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:05.508 Waiting for target to run... 00:05:05.508 04:40:56 json_config -- json_config/common.sh@25 -- # waitforlisten 2179143 /var/tmp/spdk_tgt.sock 00:05:05.508 04:40:56 json_config -- common/autotest_common.sh@831 -- # '[' -z 2179143 ']' 00:05:05.508 04:40:56 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.508 04:40:56 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.508 04:40:56 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.508 04:40:56 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.508 04:40:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.508 [2024-10-28 04:40:56.085677] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:05.508 [2024-10-28 04:40:56.085791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179143 ] 00:05:06.440 [2024-10-28 04:40:56.673294] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
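The relaunch above is the actual point of the json_config test: the same spdk_tgt binary is started again, but with --json pointing at the configuration saved before shutdown instead of being configured RPC by RPC. In outline:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json "$rootdir/spdk_tgt_config.json" &
  app_pid=$!                                    # logged above as pid 2179143
  waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock
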
00:05:06.440 [2024-10-28 04:40:56.716982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.440 [2024-10-28 04:40:56.759577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.721 [2024-10-28 04:40:59.806619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.721 [2024-10-28 04:40:59.838991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.287 04:41:00 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.287 04:41:00 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:10.287 04:41:00 json_config -- json_config/common.sh@26 -- # echo '' 00:05:10.287 00:05:10.287 04:41:00 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:10.287 04:41:00 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:10.287 INFO: Checking if target configuration is the same... 00:05:10.287 04:41:00 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.287 04:41:00 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:10.287 04:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.287 + '[' 2 -ne 2 ']' 00:05:10.287 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:10.287 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:10.287 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:10.287 +++ basename /dev/fd/62 00:05:10.287 ++ mktemp /tmp/62.XXX 00:05:10.287 + tmp_file_1=/tmp/62.0tn 00:05:10.287 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.287 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:10.287 + tmp_file_2=/tmp/spdk_tgt_config.json.b1O 00:05:10.287 + ret=0 00:05:10.287 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.545 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.545 + diff -u /tmp/62.0tn /tmp/spdk_tgt_config.json.b1O 00:05:10.545 + echo 'INFO: JSON config files are the same' 00:05:10.545 INFO: JSON config files are the same 00:05:10.545 + rm /tmp/62.0tn /tmp/spdk_tgt_config.json.b1O 00:05:10.545 + exit 0 00:05:10.545 04:41:01 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:10.545 04:41:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:10.545 INFO: changing configuration and checking if this can be detected... 
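json_diff.sh above decides "same or not" by normalising both sides with config_filter.py -method sort and handing the result to a plain diff -u; the "JSON config files are the same" branch exits 0. A condensed sketch, assuming config_filter.py reads the configuration on stdin the way the temp-file plumbing above suggests, with the two INFO messages folded in from the surrounding json_config.sh logic:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  filter="$rootdir/test/json_config/config_filter.py"
  live=$(mktemp) saved=$(mktemp)
  $rpc save_config | "$filter" -method sort > "$live"
  "$filter" -method sort < "$rootdir/spdk_tgt_config.json" > "$saved"
  if diff -u "$live" "$saved"; then
      echo "INFO: JSON config files are the same"
  else
      echo "INFO: configuration change detected."
  fi
  rm "$live" "$saved"
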
00:05:10.545 04:41:01 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:10.545 04:41:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:10.802 04:41:01 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.802 04:41:01 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:10.802 04:41:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.802 + '[' 2 -ne 2 ']' 00:05:10.802 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:10.802 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:10.802 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:10.802 +++ basename /dev/fd/62 00:05:10.802 ++ mktemp /tmp/62.XXX 00:05:10.802 + tmp_file_1=/tmp/62.FGi 00:05:10.802 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.802 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:10.802 + tmp_file_2=/tmp/spdk_tgt_config.json.Bzr 00:05:10.802 + ret=0 00:05:10.802 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.369 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.369 + diff -u /tmp/62.FGi /tmp/spdk_tgt_config.json.Bzr 00:05:11.369 + ret=1 00:05:11.369 + echo '=== Start of file: /tmp/62.FGi ===' 00:05:11.369 + cat /tmp/62.FGi 00:05:11.369 + echo '=== End of file: /tmp/62.FGi ===' 00:05:11.369 + echo '' 00:05:11.369 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Bzr ===' 00:05:11.369 + cat /tmp/spdk_tgt_config.json.Bzr 00:05:11.369 + echo '=== End of file: /tmp/spdk_tgt_config.json.Bzr ===' 00:05:11.369 + echo '' 00:05:11.369 + rm /tmp/62.FGi /tmp/spdk_tgt_config.json.Bzr 00:05:11.369 + exit 1 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:11.369 INFO: configuration change detected. 
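The ret=1 path above is the negative half of the check: deleting MallocBdevForConfigChangeCheck over RPC must make the saved and live configurations diverge, otherwise the diff machinery could never catch real drift. Roughly (the process substitution stands in for the /dev/fd/62 argument seen in the log):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
  if "$rootdir/test/json_config/json_diff.sh" <($rpc save_config) \
         "$rootdir/spdk_tgt_config.json"; then
      echo "ERROR: configuration change was not detected" >&2
      exit 1
  fi
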
00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@324 -- # [[ -n 2179143 ]] 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.369 04:41:01 json_config -- json_config/json_config.sh@330 -- # killprocess 2179143 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@950 -- # '[' -z 2179143 ']' 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@954 -- # kill -0 2179143 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@955 -- # uname 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2179143 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2179143' 00:05:11.369 killing process with pid 2179143 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@969 -- # kill 2179143 00:05:11.369 04:41:01 json_config -- common/autotest_common.sh@974 -- # wait 2179143 00:05:13.268 04:41:03 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.268 04:41:03 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:13.268 04:41:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:13.268 04:41:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.268 04:41:03 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:13.268 04:41:03 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:13.268 INFO: Success 00:05:13.268 00:05:13.268 real 0m17.570s 
00:05:13.268 user 0m19.352s 00:05:13.268 sys 0m2.701s 00:05:13.268 04:41:03 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.268 04:41:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.268 ************************************ 00:05:13.268 END TEST json_config 00:05:13.268 ************************************ 00:05:13.268 04:41:03 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.268 04:41:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.268 04:41:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.268 04:41:03 -- common/autotest_common.sh@10 -- # set +x 00:05:13.268 ************************************ 00:05:13.268 START TEST json_config_extra_key 00:05:13.268 ************************************ 00:05:13.268 04:41:03 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.268 04:41:03 json_config_extra_key -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:13.268 04:41:03 json_config_extra_key -- common/autotest_common.sh@1689 -- # lcov --version 00:05:13.268 04:41:03 json_config_extra_key -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:13.268 04:41:03 json_config_extra_key -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.268 04:41:03 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.269 04:41:03 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:13.269 04:41:03 json_config_extra_key -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.269 04:41:03 json_config_extra_key -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:13.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.269 --rc genhtml_branch_coverage=1 00:05:13.269 --rc genhtml_function_coverage=1 00:05:13.269 --rc genhtml_legend=1 00:05:13.269 --rc geninfo_all_blocks=1 00:05:13.269 --rc geninfo_unexecuted_blocks=1 00:05:13.269 00:05:13.269 ' 00:05:13.269 04:41:03 json_config_extra_key -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:13.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.269 --rc genhtml_branch_coverage=1 00:05:13.269 --rc genhtml_function_coverage=1 00:05:13.269 --rc genhtml_legend=1 00:05:13.269 --rc geninfo_all_blocks=1 00:05:13.269 --rc geninfo_unexecuted_blocks=1 00:05:13.269 00:05:13.269 ' 00:05:13.269 04:41:03 json_config_extra_key -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:13.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.269 --rc genhtml_branch_coverage=1 00:05:13.269 --rc genhtml_function_coverage=1 00:05:13.269 --rc genhtml_legend=1 00:05:13.269 --rc geninfo_all_blocks=1 00:05:13.269 --rc geninfo_unexecuted_blocks=1 00:05:13.269 00:05:13.269 ' 00:05:13.269 04:41:03 json_config_extra_key -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:13.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.269 --rc genhtml_branch_coverage=1 00:05:13.269 --rc genhtml_function_coverage=1 00:05:13.269 --rc genhtml_legend=1 00:05:13.269 --rc geninfo_all_blocks=1 00:05:13.269 --rc geninfo_unexecuted_blocks=1 00:05:13.269 00:05:13.269 ' 00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.269 
04:41:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.269 04:41:03 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.269 04:41:03 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.269 04:41:03 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.269 04:41:03 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.269 04:41:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.269 04:41:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.269 04:41:03 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.269 04:41:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:13.269 04:41:03 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.269 04:41:03 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:13.269 INFO: launching applications... 
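The associative arrays declared just above are the whole contract between this test and its helpers: one logical app name ("target") maps to a PID slot, an RPC socket, extra spdk_tgt flags, and the JSON file to load at startup. A simplified sketch of how those pieces compose into the launch that follows; the real json_config_test_start_app in test/json_config/common.sh does more bookkeeping than the hypothetical start_app shown here:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Per-app bookkeeping, mirroring the declarations in the trace above.
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

    # Start one app in the background with its socket, flags and JSON config,
    # and remember the PID so the shutdown path can signal it later.
    start_app() {
        local app=$1
        # app_params is intentionally unquoted so the flags word-split.
        "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} \
            -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }

    start_app target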
00:05:13.269 04:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.269 04:41:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:13.269 04:41:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:13.269 04:41:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.269 04:41:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.269 04:41:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.269 04:41:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.269 04:41:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.269 04:41:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2180168 00:05:13.269 04:41:03 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.269 04:41:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.269 Waiting for target to run... 00:05:13.269 04:41:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2180168 /var/tmp/spdk_tgt.sock 00:05:13.269 04:41:03 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2180168 ']' 00:05:13.269 04:41:03 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.269 04:41:03 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.269 04:41:03 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.269 04:41:03 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.269 04:41:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.269 [2024-10-28 04:41:03.811728] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:13.269 [2024-10-28 04:41:03.811812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180168 ] 00:05:13.835 [2024-10-28 04:41:04.215535] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:13.835 [2024-10-28 04:41:04.258081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.835 [2024-10-28 04:41:04.295069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.400 04:41:04 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.400 04:41:04 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:14.400 04:41:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:14.400 00:05:14.400 04:41:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:14.400 INFO: shutting down applications... 00:05:14.400 04:41:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:14.400 04:41:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:14.400 04:41:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:14.400 04:41:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2180168 ]] 00:05:14.400 04:41:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2180168 00:05:14.400 04:41:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:14.400 04:41:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.400 04:41:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2180168 00:05:14.400 04:41:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.966 04:41:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.966 04:41:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.966 04:41:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2180168 00:05:14.966 04:41:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:14.966 04:41:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:14.966 04:41:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:14.966 04:41:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:14.966 SPDK target shutdown done 00:05:14.966 04:41:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:14.966 Success 00:05:14.966 00:05:14.966 real 0m1.680s 00:05:14.966 user 0m1.591s 00:05:14.966 sys 0m0.447s 00:05:14.966 04:41:05 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.966 04:41:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:14.966 ************************************ 00:05:14.966 END TEST json_config_extra_key 00:05:14.966 ************************************ 00:05:14.966 04:41:05 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.966 04:41:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.966 04:41:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.966 04:41:05 -- common/autotest_common.sh@10 -- # set +x 00:05:14.966 ************************************ 00:05:14.966 START TEST alias_rpc 00:05:14.966 ************************************ 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.966 * Looking for test storage... 
00:05:14.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.966 04:41:05 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:14.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.966 --rc genhtml_branch_coverage=1 00:05:14.966 --rc genhtml_function_coverage=1 00:05:14.966 --rc genhtml_legend=1 00:05:14.966 --rc geninfo_all_blocks=1 00:05:14.966 --rc geninfo_unexecuted_blocks=1 00:05:14.966 00:05:14.966 ' 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:14.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.966 --rc genhtml_branch_coverage=1 00:05:14.966 --rc genhtml_function_coverage=1 00:05:14.966 --rc genhtml_legend=1 00:05:14.966 --rc geninfo_all_blocks=1 00:05:14.966 --rc geninfo_unexecuted_blocks=1 00:05:14.966 00:05:14.966 ' 00:05:14.966 04:41:05 
alias_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:14.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.966 --rc genhtml_branch_coverage=1 00:05:14.966 --rc genhtml_function_coverage=1 00:05:14.966 --rc genhtml_legend=1 00:05:14.966 --rc geninfo_all_blocks=1 00:05:14.966 --rc geninfo_unexecuted_blocks=1 00:05:14.966 00:05:14.966 ' 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:14.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.966 --rc genhtml_branch_coverage=1 00:05:14.966 --rc genhtml_function_coverage=1 00:05:14.966 --rc genhtml_legend=1 00:05:14.966 --rc geninfo_all_blocks=1 00:05:14.966 --rc geninfo_unexecuted_blocks=1 00:05:14.966 00:05:14.966 ' 00:05:14.966 04:41:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:14.966 04:41:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2180478 00:05:14.966 04:41:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.966 04:41:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2180478 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2180478 ']' 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.966 04:41:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.966 [2024-10-28 04:41:05.542493] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:14.966 [2024-10-28 04:41:05.542571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180478 ] 00:05:15.225 [2024-10-28 04:41:05.675270] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
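As with the earlier json_config apps, nothing talks to this target until it is actually serving RPCs: the harness polls the fresh process and its UNIX socket before issuing the first call, and later tears the target down by PID. The real helpers are waitforlisten and killprocess in test/common/autotest_common.sh; the loop below is only a simplified illustration of the same idea, not their code:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Block until the target answers on its RPC socket, or give up after ~10s.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            # If the process died during startup there is nothing to wait for.
            kill -0 "$pid" 2>/dev/null || return 1
            # Any successful RPC proves the server is listening; spdk_get_version is cheap.
            "$rootdir/scripts/rpc.py" -s "$sock" spdk_get_version >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }

    # Shut the target down gracefully and reap it (assumes it is a child of this shell).
    stop_target() {
        kill -SIGINT "$1" 2>/dev/null
        wait "$1" 2>/dev/null
    }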
00:05:15.225 [2024-10-28 04:41:05.716020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.225 [2024-10-28 04:41:05.767452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.158 04:41:06 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.158 04:41:06 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:16.158 04:41:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:16.416 04:41:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2180478 00:05:16.416 04:41:06 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2180478 ']' 00:05:16.416 04:41:06 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2180478 00:05:16.416 04:41:06 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:16.416 04:41:06 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.416 04:41:06 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2180478 00:05:16.416 04:41:06 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.416 04:41:06 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.416 04:41:06 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2180478' 00:05:16.416 killing process with pid 2180478 00:05:16.416 04:41:06 alias_rpc -- common/autotest_common.sh@969 -- # kill 2180478 00:05:16.416 04:41:06 alias_rpc -- common/autotest_common.sh@974 -- # wait 2180478 00:05:16.674 00:05:16.674 real 0m1.875s 00:05:16.674 user 0m2.084s 00:05:16.674 sys 0m0.487s 00:05:16.674 04:41:07 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.674 04:41:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.674 ************************************ 00:05:16.674 END TEST alias_rpc 00:05:16.674 ************************************ 00:05:16.674 04:41:07 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:16.674 04:41:07 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:16.674 04:41:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.674 04:41:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.674 04:41:07 -- common/autotest_common.sh@10 -- # set +x 00:05:16.932 ************************************ 00:05:16.932 START TEST spdkcli_tcp 00:05:16.932 ************************************ 00:05:16.932 04:41:07 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:16.932 * Looking for test storage... 
00:05:16.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:16.932 04:41:07 spdkcli_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:16.932 04:41:07 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:05:16.932 04:41:07 spdkcli_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:16.932 04:41:07 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.933 04:41:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:16.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.933 --rc genhtml_branch_coverage=1 00:05:16.933 --rc genhtml_function_coverage=1 00:05:16.933 --rc genhtml_legend=1 00:05:16.933 --rc geninfo_all_blocks=1 00:05:16.933 --rc geninfo_unexecuted_blocks=1 00:05:16.933 00:05:16.933 ' 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:16.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.933 --rc genhtml_branch_coverage=1 00:05:16.933 --rc genhtml_function_coverage=1 00:05:16.933 --rc genhtml_legend=1 00:05:16.933 --rc geninfo_all_blocks=1 00:05:16.933 --rc 
geninfo_unexecuted_blocks=1 00:05:16.933 00:05:16.933 ' 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:16.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.933 --rc genhtml_branch_coverage=1 00:05:16.933 --rc genhtml_function_coverage=1 00:05:16.933 --rc genhtml_legend=1 00:05:16.933 --rc geninfo_all_blocks=1 00:05:16.933 --rc geninfo_unexecuted_blocks=1 00:05:16.933 00:05:16.933 ' 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:16.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.933 --rc genhtml_branch_coverage=1 00:05:16.933 --rc genhtml_function_coverage=1 00:05:16.933 --rc genhtml_legend=1 00:05:16.933 --rc geninfo_all_blocks=1 00:05:16.933 --rc geninfo_unexecuted_blocks=1 00:05:16.933 00:05:16.933 ' 00:05:16.933 04:41:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:16.933 04:41:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:16.933 04:41:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:16.933 04:41:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:16.933 04:41:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:16.933 04:41:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:16.933 04:41:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.933 04:41:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2180683 00:05:16.933 04:41:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:16.933 04:41:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2180683 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2180683 ']' 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.933 04:41:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.933 [2024-10-28 04:41:07.473496] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:16.933 [2024-10-28 04:41:07.473584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180683 ] 00:05:17.191 [2024-10-28 04:41:07.604732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
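The point of this test, visible in the trace that follows, is that the target itself only listens on /var/tmp/spdk.sock: TCP access is provided by a socat process relaying 127.0.0.1:9998 to the UNIX socket, with rpc.py then pointed at the TCP address. A minimal sketch of that bridge using the same addresses and retry/timeout flags as the trace:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Relay a local TCP port to the target's UNIX-domain RPC socket for one connection.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Drive the same target over TCP; -r bounds connection retries, -t the RPC timeout.
    "$rootdir/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    # Drop the bridge once the TCP path has been exercised (socat may already have exited).
    kill "$socat_pid" 2>/dev/null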
00:05:17.191 [2024-10-28 04:41:07.645086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.191 [2024-10-28 04:41:07.699698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.191 [2024-10-28 04:41:07.699703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.125 04:41:08 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.125 04:41:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:18.125 04:41:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2180819 00:05:18.125 04:41:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:18.125 04:41:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.125 [ 00:05:18.125 "bdev_malloc_delete", 00:05:18.125 "bdev_malloc_create", 00:05:18.125 "bdev_null_resize", 00:05:18.125 "bdev_null_delete", 00:05:18.125 "bdev_null_create", 00:05:18.125 "bdev_nvme_cuse_unregister", 00:05:18.125 "bdev_nvme_cuse_register", 00:05:18.125 "bdev_opal_new_user", 00:05:18.125 "bdev_opal_set_lock_state", 00:05:18.125 "bdev_opal_delete", 00:05:18.125 "bdev_opal_get_info", 00:05:18.125 "bdev_opal_create", 00:05:18.125 "bdev_nvme_opal_revert", 00:05:18.125 "bdev_nvme_opal_init", 00:05:18.125 "bdev_nvme_send_cmd", 00:05:18.125 "bdev_nvme_set_keys", 00:05:18.125 "bdev_nvme_get_path_iostat", 00:05:18.125 "bdev_nvme_get_mdns_discovery_info", 00:05:18.125 "bdev_nvme_stop_mdns_discovery", 00:05:18.125 "bdev_nvme_start_mdns_discovery", 00:05:18.125 "bdev_nvme_set_multipath_policy", 00:05:18.125 "bdev_nvme_set_preferred_path", 00:05:18.125 "bdev_nvme_get_io_paths", 00:05:18.125 "bdev_nvme_remove_error_injection", 00:05:18.125 "bdev_nvme_add_error_injection", 00:05:18.125 "bdev_nvme_get_discovery_info", 00:05:18.125 "bdev_nvme_stop_discovery", 00:05:18.125 "bdev_nvme_start_discovery", 00:05:18.125 "bdev_nvme_get_controller_health_info", 00:05:18.125 "bdev_nvme_disable_controller", 00:05:18.125 "bdev_nvme_enable_controller", 00:05:18.125 "bdev_nvme_reset_controller", 00:05:18.125 "bdev_nvme_get_transport_statistics", 00:05:18.125 "bdev_nvme_apply_firmware", 00:05:18.125 "bdev_nvme_detach_controller", 00:05:18.125 "bdev_nvme_get_controllers", 00:05:18.125 "bdev_nvme_attach_controller", 00:05:18.125 "bdev_nvme_set_hotplug", 00:05:18.125 "bdev_nvme_set_options", 00:05:18.125 "bdev_passthru_delete", 00:05:18.125 "bdev_passthru_create", 00:05:18.125 "bdev_lvol_set_parent_bdev", 00:05:18.125 "bdev_lvol_set_parent", 00:05:18.125 "bdev_lvol_check_shallow_copy", 00:05:18.125 "bdev_lvol_start_shallow_copy", 00:05:18.125 "bdev_lvol_grow_lvstore", 00:05:18.125 "bdev_lvol_get_lvols", 00:05:18.125 "bdev_lvol_get_lvstores", 00:05:18.125 "bdev_lvol_delete", 00:05:18.125 "bdev_lvol_set_read_only", 00:05:18.125 "bdev_lvol_resize", 00:05:18.125 "bdev_lvol_decouple_parent", 00:05:18.125 "bdev_lvol_inflate", 00:05:18.125 "bdev_lvol_rename", 00:05:18.125 "bdev_lvol_clone_bdev", 00:05:18.125 "bdev_lvol_clone", 00:05:18.125 "bdev_lvol_snapshot", 00:05:18.125 "bdev_lvol_create", 00:05:18.125 "bdev_lvol_delete_lvstore", 00:05:18.125 "bdev_lvol_rename_lvstore", 00:05:18.125 "bdev_lvol_create_lvstore", 00:05:18.125 "bdev_raid_set_options", 00:05:18.125 "bdev_raid_remove_base_bdev", 00:05:18.125 "bdev_raid_add_base_bdev", 00:05:18.125 "bdev_raid_delete", 00:05:18.125 "bdev_raid_create", 00:05:18.125 "bdev_raid_get_bdevs", 00:05:18.125 "bdev_error_inject_error", 
00:05:18.125 "bdev_error_delete", 00:05:18.125 "bdev_error_create", 00:05:18.125 "bdev_split_delete", 00:05:18.125 "bdev_split_create", 00:05:18.125 "bdev_delay_delete", 00:05:18.125 "bdev_delay_create", 00:05:18.125 "bdev_delay_update_latency", 00:05:18.125 "bdev_zone_block_delete", 00:05:18.125 "bdev_zone_block_create", 00:05:18.125 "blobfs_create", 00:05:18.125 "blobfs_detect", 00:05:18.125 "blobfs_set_cache_size", 00:05:18.125 "bdev_aio_delete", 00:05:18.125 "bdev_aio_rescan", 00:05:18.125 "bdev_aio_create", 00:05:18.125 "bdev_ftl_set_property", 00:05:18.125 "bdev_ftl_get_properties", 00:05:18.125 "bdev_ftl_get_stats", 00:05:18.125 "bdev_ftl_unmap", 00:05:18.125 "bdev_ftl_unload", 00:05:18.125 "bdev_ftl_delete", 00:05:18.125 "bdev_ftl_load", 00:05:18.125 "bdev_ftl_create", 00:05:18.125 "bdev_virtio_attach_controller", 00:05:18.125 "bdev_virtio_scsi_get_devices", 00:05:18.125 "bdev_virtio_detach_controller", 00:05:18.125 "bdev_virtio_blk_set_hotplug", 00:05:18.125 "bdev_iscsi_delete", 00:05:18.125 "bdev_iscsi_create", 00:05:18.125 "bdev_iscsi_set_options", 00:05:18.125 "accel_error_inject_error", 00:05:18.125 "ioat_scan_accel_module", 00:05:18.125 "dsa_scan_accel_module", 00:05:18.125 "iaa_scan_accel_module", 00:05:18.125 "vfu_virtio_create_fs_endpoint", 00:05:18.125 "vfu_virtio_create_scsi_endpoint", 00:05:18.125 "vfu_virtio_scsi_remove_target", 00:05:18.125 "vfu_virtio_scsi_add_target", 00:05:18.125 "vfu_virtio_create_blk_endpoint", 00:05:18.125 "vfu_virtio_delete_endpoint", 00:05:18.125 "keyring_file_remove_key", 00:05:18.125 "keyring_file_add_key", 00:05:18.125 "keyring_linux_set_options", 00:05:18.125 "fsdev_aio_delete", 00:05:18.125 "fsdev_aio_create", 00:05:18.125 "iscsi_get_histogram", 00:05:18.125 "iscsi_enable_histogram", 00:05:18.125 "iscsi_set_options", 00:05:18.125 "iscsi_get_auth_groups", 00:05:18.125 "iscsi_auth_group_remove_secret", 00:05:18.125 "iscsi_auth_group_add_secret", 00:05:18.125 "iscsi_delete_auth_group", 00:05:18.125 "iscsi_create_auth_group", 00:05:18.125 "iscsi_set_discovery_auth", 00:05:18.125 "iscsi_get_options", 00:05:18.125 "iscsi_target_node_request_logout", 00:05:18.125 "iscsi_target_node_set_redirect", 00:05:18.125 "iscsi_target_node_set_auth", 00:05:18.125 "iscsi_target_node_add_lun", 00:05:18.125 "iscsi_get_stats", 00:05:18.125 "iscsi_get_connections", 00:05:18.125 "iscsi_portal_group_set_auth", 00:05:18.125 "iscsi_start_portal_group", 00:05:18.125 "iscsi_delete_portal_group", 00:05:18.125 "iscsi_create_portal_group", 00:05:18.125 "iscsi_get_portal_groups", 00:05:18.125 "iscsi_delete_target_node", 00:05:18.125 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.125 "iscsi_target_node_add_pg_ig_maps", 00:05:18.125 "iscsi_create_target_node", 00:05:18.125 "iscsi_get_target_nodes", 00:05:18.125 "iscsi_delete_initiator_group", 00:05:18.125 "iscsi_initiator_group_remove_initiators", 00:05:18.125 "iscsi_initiator_group_add_initiators", 00:05:18.125 "iscsi_create_initiator_group", 00:05:18.125 "iscsi_get_initiator_groups", 00:05:18.125 "nvmf_set_crdt", 00:05:18.125 "nvmf_set_config", 00:05:18.125 "nvmf_set_max_subsystems", 00:05:18.125 "nvmf_stop_mdns_prr", 00:05:18.125 "nvmf_publish_mdns_prr", 00:05:18.125 "nvmf_subsystem_get_listeners", 00:05:18.125 "nvmf_subsystem_get_qpairs", 00:05:18.125 "nvmf_subsystem_get_controllers", 00:05:18.125 "nvmf_get_stats", 00:05:18.125 "nvmf_get_transports", 00:05:18.125 "nvmf_create_transport", 00:05:18.125 "nvmf_get_targets", 00:05:18.125 "nvmf_delete_target", 00:05:18.125 "nvmf_create_target", 00:05:18.125 
"nvmf_subsystem_allow_any_host", 00:05:18.125 "nvmf_subsystem_set_keys", 00:05:18.125 "nvmf_subsystem_remove_host", 00:05:18.125 "nvmf_subsystem_add_host", 00:05:18.125 "nvmf_ns_remove_host", 00:05:18.125 "nvmf_ns_add_host", 00:05:18.125 "nvmf_subsystem_remove_ns", 00:05:18.125 "nvmf_subsystem_set_ns_ana_group", 00:05:18.125 "nvmf_subsystem_add_ns", 00:05:18.125 "nvmf_subsystem_listener_set_ana_state", 00:05:18.125 "nvmf_discovery_get_referrals", 00:05:18.125 "nvmf_discovery_remove_referral", 00:05:18.125 "nvmf_discovery_add_referral", 00:05:18.125 "nvmf_subsystem_remove_listener", 00:05:18.125 "nvmf_subsystem_add_listener", 00:05:18.125 "nvmf_delete_subsystem", 00:05:18.125 "nvmf_create_subsystem", 00:05:18.125 "nvmf_get_subsystems", 00:05:18.125 "env_dpdk_get_mem_stats", 00:05:18.125 "nbd_get_disks", 00:05:18.125 "nbd_stop_disk", 00:05:18.125 "nbd_start_disk", 00:05:18.125 "ublk_recover_disk", 00:05:18.125 "ublk_get_disks", 00:05:18.125 "ublk_stop_disk", 00:05:18.125 "ublk_start_disk", 00:05:18.125 "ublk_destroy_target", 00:05:18.125 "ublk_create_target", 00:05:18.125 "virtio_blk_create_transport", 00:05:18.125 "virtio_blk_get_transports", 00:05:18.125 "vhost_controller_set_coalescing", 00:05:18.125 "vhost_get_controllers", 00:05:18.125 "vhost_delete_controller", 00:05:18.125 "vhost_create_blk_controller", 00:05:18.125 "vhost_scsi_controller_remove_target", 00:05:18.125 "vhost_scsi_controller_add_target", 00:05:18.125 "vhost_start_scsi_controller", 00:05:18.125 "vhost_create_scsi_controller", 00:05:18.125 "thread_set_cpumask", 00:05:18.125 "scheduler_set_options", 00:05:18.125 "framework_get_governor", 00:05:18.125 "framework_get_scheduler", 00:05:18.125 "framework_set_scheduler", 00:05:18.125 "framework_get_reactors", 00:05:18.125 "thread_get_io_channels", 00:05:18.125 "thread_get_pollers", 00:05:18.125 "thread_get_stats", 00:05:18.125 "framework_monitor_context_switch", 00:05:18.125 "spdk_kill_instance", 00:05:18.125 "log_enable_timestamps", 00:05:18.125 "log_get_flags", 00:05:18.126 "log_clear_flag", 00:05:18.126 "log_set_flag", 00:05:18.126 "log_get_level", 00:05:18.126 "log_set_level", 00:05:18.126 "log_get_print_level", 00:05:18.126 "log_set_print_level", 00:05:18.126 "framework_enable_cpumask_locks", 00:05:18.126 "framework_disable_cpumask_locks", 00:05:18.126 "framework_wait_init", 00:05:18.126 "framework_start_init", 00:05:18.126 "scsi_get_devices", 00:05:18.126 "bdev_get_histogram", 00:05:18.126 "bdev_enable_histogram", 00:05:18.126 "bdev_set_qos_limit", 00:05:18.126 "bdev_set_qd_sampling_period", 00:05:18.126 "bdev_get_bdevs", 00:05:18.126 "bdev_reset_iostat", 00:05:18.126 "bdev_get_iostat", 00:05:18.126 "bdev_examine", 00:05:18.126 "bdev_wait_for_examine", 00:05:18.126 "bdev_set_options", 00:05:18.126 "accel_get_stats", 00:05:18.126 "accel_set_options", 00:05:18.126 "accel_set_driver", 00:05:18.126 "accel_crypto_key_destroy", 00:05:18.126 "accel_crypto_keys_get", 00:05:18.126 "accel_crypto_key_create", 00:05:18.126 "accel_assign_opc", 00:05:18.126 "accel_get_module_info", 00:05:18.126 "accel_get_opc_assignments", 00:05:18.126 "vmd_rescan", 00:05:18.126 "vmd_remove_device", 00:05:18.126 "vmd_enable", 00:05:18.126 "sock_get_default_impl", 00:05:18.126 "sock_set_default_impl", 00:05:18.126 "sock_impl_set_options", 00:05:18.126 "sock_impl_get_options", 00:05:18.126 "iobuf_get_stats", 00:05:18.126 "iobuf_set_options", 00:05:18.126 "keyring_get_keys", 00:05:18.126 "vfu_tgt_set_base_path", 00:05:18.126 "framework_get_pci_devices", 00:05:18.126 "framework_get_config", 00:05:18.126 
"framework_get_subsystems", 00:05:18.126 "fsdev_set_opts", 00:05:18.126 "fsdev_get_opts", 00:05:18.126 "trace_get_info", 00:05:18.126 "trace_get_tpoint_group_mask", 00:05:18.126 "trace_disable_tpoint_group", 00:05:18.126 "trace_enable_tpoint_group", 00:05:18.126 "trace_clear_tpoint_mask", 00:05:18.126 "trace_set_tpoint_mask", 00:05:18.126 "notify_get_notifications", 00:05:18.126 "notify_get_types", 00:05:18.126 "spdk_get_version", 00:05:18.126 "rpc_get_methods" 00:05:18.126 ] 00:05:18.384 04:41:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.384 04:41:08 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.384 04:41:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.384 04:41:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.384 04:41:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2180683 00:05:18.384 04:41:08 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2180683 ']' 00:05:18.384 04:41:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2180683 00:05:18.384 04:41:08 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:18.384 04:41:08 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.384 04:41:08 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2180683 00:05:18.384 04:41:08 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.384 04:41:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.384 04:41:08 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2180683' 00:05:18.384 killing process with pid 2180683 00:05:18.384 04:41:08 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2180683 00:05:18.384 04:41:08 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2180683 00:05:18.668 00:05:18.668 real 0m1.890s 00:05:18.668 user 0m3.529s 00:05:18.668 sys 0m0.513s 00:05:18.668 04:41:09 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.668 04:41:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.669 ************************************ 00:05:18.669 END TEST spdkcli_tcp 00:05:18.669 ************************************ 00:05:18.669 04:41:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:18.669 04:41:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.669 04:41:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.669 04:41:09 -- common/autotest_common.sh@10 -- # set +x 00:05:18.669 ************************************ 00:05:18.669 START TEST dpdk_mem_utility 00:05:18.669 ************************************ 00:05:18.669 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:18.948 * Looking for test storage... 
00:05:18.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:18.948 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:18.948 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lcov --version 00:05:18.948 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:18.948 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.948 04:41:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:18.949 04:41:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.949 04:41:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:18.949 04:41:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:18.949 04:41:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.949 04:41:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:18.949 04:41:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.949 04:41:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.949 04:41:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.949 04:41:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:18.949 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.949 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:18.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.949 --rc genhtml_branch_coverage=1 00:05:18.949 --rc genhtml_function_coverage=1 00:05:18.949 --rc genhtml_legend=1 00:05:18.949 --rc geninfo_all_blocks=1 00:05:18.949 --rc geninfo_unexecuted_blocks=1 00:05:18.949 00:05:18.949 ' 00:05:18.949 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:18.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.949 --rc 
genhtml_branch_coverage=1 00:05:18.949 --rc genhtml_function_coverage=1 00:05:18.949 --rc genhtml_legend=1 00:05:18.949 --rc geninfo_all_blocks=1 00:05:18.949 --rc geninfo_unexecuted_blocks=1 00:05:18.949 00:05:18.949 ' 00:05:18.949 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:18.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.949 --rc genhtml_branch_coverage=1 00:05:18.949 --rc genhtml_function_coverage=1 00:05:18.949 --rc genhtml_legend=1 00:05:18.949 --rc geninfo_all_blocks=1 00:05:18.949 --rc geninfo_unexecuted_blocks=1 00:05:18.949 00:05:18.949 ' 00:05:18.949 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:18.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.949 --rc genhtml_branch_coverage=1 00:05:18.949 --rc genhtml_function_coverage=1 00:05:18.949 --rc genhtml_legend=1 00:05:18.949 --rc geninfo_all_blocks=1 00:05:18.949 --rc geninfo_unexecuted_blocks=1 00:05:18.949 00:05:18.949 ' 00:05:18.949 04:41:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:18.949 04:41:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2181020 00:05:18.949 04:41:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.949 04:41:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2181020 00:05:18.949 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2181020 ']' 00:05:18.949 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.949 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.949 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.949 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.949 04:41:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.949 [2024-10-28 04:41:09.424896] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:18.949 [2024-10-28 04:41:09.425007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181020 ] 00:05:19.207 [2024-10-28 04:41:09.557030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
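What the remainder of this test does, as traced below, is a two-step flow: an RPC first asks the live target to write its DPDK memory statistics to a file (the reply names /tmp/spdk_mem_dump.txt), then dpdk_mem_info.py is run offline against that dump, once for the overall heap/mempool/memzone summary and once with -m 0 for the per-element detail of heap 0. A condensed sketch, assuming the script defaults to the dump file named in the RPC reply:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Step 1: have the running target (default socket /var/tmp/spdk.sock) dump its
    # DPDK memory state; the JSON reply carries the output path.
    "$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats

    # Step 2: post-process the dump offline.
    "$rootdir/scripts/dpdk_mem_info.py"        # heaps, mempools and memzones summary
    "$rootdir/scripts/dpdk_mem_info.py" -m 0   # busy/free element detail for heap 0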
00:05:19.207 [2024-10-28 04:41:09.591603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.207 [2024-10-28 04:41:09.640211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.141 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.141 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:20.141 04:41:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:20.141 04:41:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:20.141 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.141 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.141 { 00:05:20.141 "filename": "/tmp/spdk_mem_dump.txt" 00:05:20.141 } 00:05:20.141 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.141 04:41:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:20.141 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:20.141 1 heaps totaling size 810.000000 MiB 00:05:20.141 size: 810.000000 MiB heap id: 0 00:05:20.141 end heaps---------- 00:05:20.141 9 mempools totaling size 595.772034 MiB 00:05:20.141 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:20.141 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:20.141 size: 92.545471 MiB name: bdev_io_2181020 00:05:20.141 size: 50.003479 MiB name: msgpool_2181020 00:05:20.141 size: 36.509338 MiB name: fsdev_io_2181020 00:05:20.141 size: 21.763794 MiB name: PDU_Pool 00:05:20.141 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:20.141 size: 4.133484 MiB name: evtpool_2181020 00:05:20.141 size: 0.026123 MiB name: Session_Pool 00:05:20.141 end mempools------- 00:05:20.141 6 memzones totaling size 4.142822 MiB 00:05:20.141 size: 1.000366 MiB name: RG_ring_0_2181020 00:05:20.141 size: 1.000366 MiB name: RG_ring_1_2181020 00:05:20.141 size: 1.000366 MiB name: RG_ring_4_2181020 00:05:20.141 size: 1.000366 MiB name: RG_ring_5_2181020 00:05:20.141 size: 0.125366 MiB name: RG_ring_2_2181020 00:05:20.141 size: 0.015991 MiB name: RG_ring_3_2181020 00:05:20.141 end memzones------- 00:05:20.141 04:41:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:20.141 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:20.141 list of free elements. 
size: 10.745300 MiB 00:05:20.141 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:20.141 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:20.141 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:20.141 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:20.142 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:20.142 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:20.142 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:20.142 element at address: 0x200000200000 with size: 0.600159 MiB 00:05:20.142 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:20.142 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:20.142 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:20.142 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:20.142 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:20.142 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:20.142 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:20.142 list of standard malloc elements. size: 199.335815 MiB 00:05:20.142 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:20.142 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:20.142 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:20.142 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:20.142 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:20.142 element at address: 0x2000003bbf00 with size: 0.257935 MiB 00:05:20.142 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:20.142 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:20.142 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:20.142 element at address: 0x2000002b9c40 with size: 0.000183 MiB 00:05:20.142 element at address: 0x2000003bbe40 with size: 0.000183 MiB 00:05:20.142 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:20.142 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:20.142 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:20.142 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:20.142 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:20.142 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:20.142 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:20.142 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:20.142 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:20.142 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:20.142 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:20.142 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:20.142 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:20.142 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:20.142 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:20.142 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:20.142 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:20.142 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:20.142 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:20.142 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:20.142 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:20.142 list of memzone associated elements. size: 599.918884 MiB 00:05:20.142 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:20.142 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:20.142 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:20.142 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:20.142 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:20.142 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2181020_0 00:05:20.142 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:20.142 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2181020_0 00:05:20.142 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:20.142 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2181020_0 00:05:20.142 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:20.142 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:20.142 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:20.142 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:20.142 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:20.142 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2181020_0 00:05:20.142 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:20.142 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2181020 00:05:20.142 element at address: 0x2000002b9d00 with size: 1.008118 MiB 00:05:20.142 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2181020 00:05:20.142 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:20.142 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:20.142 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:20.142 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:20.142 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:20.142 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:20.142 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:20.142 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:20.142 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:20.142 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2181020 00:05:20.142 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:20.142 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2181020 00:05:20.142 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:20.142 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2181020 00:05:20.142 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:20.142 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2181020 00:05:20.142 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:20.142 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2181020 00:05:20.142 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:20.142 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2181020 00:05:20.142 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:20.142 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:20.142 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:20.142 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:20.142 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:20.142 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:20.142 element at address: 0x200000299a40 with size: 0.125488 MiB 00:05:20.142 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2181020 00:05:20.142 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:20.142 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2181020 00:05:20.142 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:20.142 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:20.142 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:20.142 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:20.142 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:20.142 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2181020 00:05:20.142 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:20.142 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:20.142 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:20.142 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2181020 00:05:20.142 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:20.142 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2181020 00:05:20.142 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:20.142 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2181020 00:05:20.142 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:20.142 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:20.142 04:41:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:20.142 04:41:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2181020 00:05:20.142 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2181020 ']' 00:05:20.142 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2181020 00:05:20.142 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:20.142 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.142 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2181020 00:05:20.142 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.142 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.142 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2181020' 00:05:20.142 killing process with pid 2181020 00:05:20.142 04:41:10 
dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2181020 00:05:20.142 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2181020 00:05:20.401 00:05:20.401 real 0m1.760s 00:05:20.401 user 0m1.867s 00:05:20.401 sys 0m0.468s 00:05:20.401 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.401 04:41:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.401 ************************************ 00:05:20.401 END TEST dpdk_mem_utility 00:05:20.401 ************************************ 00:05:20.660 04:41:11 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.660 04:41:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.660 04:41:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.660 04:41:11 -- common/autotest_common.sh@10 -- # set +x 00:05:20.660 ************************************ 00:05:20.660 START TEST event 00:05:20.660 ************************************ 00:05:20.660 04:41:11 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.660 * Looking for test storage... 00:05:20.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:20.660 04:41:11 event -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:20.660 04:41:11 event -- common/autotest_common.sh@1689 -- # lcov --version 00:05:20.660 04:41:11 event -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:20.660 04:41:11 event -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:20.660 04:41:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.660 04:41:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.660 04:41:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.660 04:41:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.660 04:41:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.660 04:41:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.660 04:41:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.660 04:41:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.660 04:41:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.660 04:41:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.660 04:41:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.660 04:41:11 event -- scripts/common.sh@344 -- # case "$op" in 00:05:20.660 04:41:11 event -- scripts/common.sh@345 -- # : 1 00:05:20.660 04:41:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.660 04:41:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.660 04:41:11 event -- scripts/common.sh@365 -- # decimal 1 00:05:20.660 04:41:11 event -- scripts/common.sh@353 -- # local d=1 00:05:20.660 04:41:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.660 04:41:11 event -- scripts/common.sh@355 -- # echo 1 00:05:20.660 04:41:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.660 04:41:11 event -- scripts/common.sh@366 -- # decimal 2 00:05:20.660 04:41:11 event -- scripts/common.sh@353 -- # local d=2 00:05:20.660 04:41:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.660 04:41:11 event -- scripts/common.sh@355 -- # echo 2 00:05:20.660 04:41:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.660 04:41:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.660 04:41:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.660 04:41:11 event -- scripts/common.sh@368 -- # return 0 00:05:20.660 04:41:11 event -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.660 04:41:11 event -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:20.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.660 --rc genhtml_branch_coverage=1 00:05:20.660 --rc genhtml_function_coverage=1 00:05:20.660 --rc genhtml_legend=1 00:05:20.660 --rc geninfo_all_blocks=1 00:05:20.660 --rc geninfo_unexecuted_blocks=1 00:05:20.660 00:05:20.660 ' 00:05:20.660 04:41:11 event -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:20.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.660 --rc genhtml_branch_coverage=1 00:05:20.660 --rc genhtml_function_coverage=1 00:05:20.660 --rc genhtml_legend=1 00:05:20.660 --rc geninfo_all_blocks=1 00:05:20.660 --rc geninfo_unexecuted_blocks=1 00:05:20.660 00:05:20.660 ' 00:05:20.660 04:41:11 event -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:20.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.660 --rc genhtml_branch_coverage=1 00:05:20.660 --rc genhtml_function_coverage=1 00:05:20.660 --rc genhtml_legend=1 00:05:20.660 --rc geninfo_all_blocks=1 00:05:20.660 --rc geninfo_unexecuted_blocks=1 00:05:20.660 00:05:20.660 ' 00:05:20.660 04:41:11 event -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:20.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.660 --rc genhtml_branch_coverage=1 00:05:20.660 --rc genhtml_function_coverage=1 00:05:20.660 --rc genhtml_legend=1 00:05:20.660 --rc geninfo_all_blocks=1 00:05:20.660 --rc geninfo_unexecuted_blocks=1 00:05:20.660 00:05:20.660 ' 00:05:20.660 04:41:11 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:20.660 04:41:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:20.660 04:41:11 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.660 04:41:11 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:20.661 04:41:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.661 04:41:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.661 ************************************ 00:05:20.661 START TEST event_perf 00:05:20.661 ************************************ 00:05:20.661 04:41:11 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:20.661 Running I/O for 1 seconds...[2024-10-28 04:41:11.217740] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:20.661 [2024-10-28 04:41:11.217805] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181342 ] 00:05:20.919 [2024-10-28 04:41:11.351568] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:20.919 [2024-10-28 04:41:11.392298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.919 [2024-10-28 04:41:11.448142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.919 [2024-10-28 04:41:11.448209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.919 [2024-10-28 04:41:11.448299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.919 [2024-10-28 04:41:11.448302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.293 Running I/O for 1 seconds... 00:05:22.293 lcore 0: 220905 00:05:22.293 lcore 1: 220905 00:05:22.293 lcore 2: 220904 00:05:22.293 lcore 3: 220905 00:05:22.293 done. 00:05:22.293 00:05:22.293 real 0m1.293s 00:05:22.293 user 0m4.104s 00:05:22.293 sys 0m0.077s 00:05:22.293 04:41:12 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.293 04:41:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.293 ************************************ 00:05:22.293 END TEST event_perf 00:05:22.293 ************************************ 00:05:22.293 04:41:12 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.293 04:41:12 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:22.293 04:41:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.293 04:41:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.293 ************************************ 00:05:22.293 START TEST event_reactor 00:05:22.293 ************************************ 00:05:22.293 04:41:12 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.293 [2024-10-28 04:41:12.562498] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:22.293 [2024-10-28 04:41:12.562563] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181500 ] 00:05:22.293 [2024-10-28 04:41:12.695408] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
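The event_perf run above is simply the example binary driven with a core mask and a duration; the per-lcore counters it prints are how many events each reactor processed in that second. A minimal way to repeat the same run by hand, with the binary path and flags copied from the trace; the tee/grep post-processing is only an illustration and is not part of event.sh.

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Same invocation as the trace: four reactors (core mask 0xF), run for one second.
    "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1 | tee /tmp/event_perf.out

    # Pull out the per-lcore counters printed at the end ("lcore 0: 220905", ...).
    grep '^lcore' /tmp/event_perf.out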
00:05:22.293 [2024-10-28 04:41:12.736922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.293 [2024-10-28 04:41:12.785154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.665 test_start 00:05:23.665 oneshot 00:05:23.665 tick 100 00:05:23.665 tick 100 00:05:23.665 tick 250 00:05:23.665 tick 100 00:05:23.665 tick 100 00:05:23.665 tick 100 00:05:23.665 tick 250 00:05:23.665 tick 500 00:05:23.665 tick 100 00:05:23.665 tick 100 00:05:23.665 tick 250 00:05:23.665 tick 100 00:05:23.665 tick 100 00:05:23.665 test_end 00:05:23.665 00:05:23.665 real 0m1.282s 00:05:23.665 user 0m1.105s 00:05:23.665 sys 0m0.072s 00:05:23.665 04:41:13 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.665 04:41:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:23.665 ************************************ 00:05:23.665 END TEST event_reactor 00:05:23.665 ************************************ 00:05:23.665 04:41:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.665 04:41:13 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:23.665 04:41:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.665 04:41:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.665 ************************************ 00:05:23.665 START TEST event_reactor_perf 00:05:23.665 ************************************ 00:05:23.665 04:41:13 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.665 [2024-10-28 04:41:13.898576] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:23.665 [2024-10-28 04:41:13.898649] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181649 ] 00:05:23.665 [2024-10-28 04:41:14.031232] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
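The reactor and reactor_perf passes work the same way: the first prints the test_start/oneshot/tick markers shown above, the second reports an events-per-second figure. A small sketch of running both and extracting that figure, again using the workspace paths from the trace; the awk filter is illustrative only.

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Single-core reactor smoke test; prints the test_start/oneshot/tick markers seen above.
    "$SPDK/test/event/reactor/reactor" -t 1

    # Throughput variant; keep only the events-per-second figure from its
    # "Performance: 354849 events per second" summary line.
    "$SPDK/test/event/reactor_perf/reactor_perf" -t 1 | awk '/Performance:/ {print $2}'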
00:05:23.665 [2024-10-28 04:41:14.072663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.665 [2024-10-28 04:41:14.119274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.600 test_start 00:05:24.600 test_end 00:05:24.600 Performance: 354849 events per second 00:05:24.600 00:05:24.600 real 0m1.281s 00:05:24.600 user 0m1.115s 00:05:24.600 sys 0m0.061s 00:05:24.600 04:41:15 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.600 04:41:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.600 ************************************ 00:05:24.600 END TEST event_reactor_perf 00:05:24.600 ************************************ 00:05:24.600 04:41:15 event -- event/event.sh@49 -- # uname -s 00:05:24.600 04:41:15 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:24.600 04:41:15 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.600 04:41:15 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.600 04:41:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.600 04:41:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.859 ************************************ 00:05:24.859 START TEST event_scheduler 00:05:24.859 ************************************ 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.859 * Looking for test storage... 00:05:24.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@1689 -- # lcov --version 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.859 04:41:15 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:24.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.859 --rc genhtml_branch_coverage=1 00:05:24.859 --rc genhtml_function_coverage=1 00:05:24.859 --rc genhtml_legend=1 00:05:24.859 --rc geninfo_all_blocks=1 00:05:24.859 --rc geninfo_unexecuted_blocks=1 00:05:24.859 00:05:24.859 ' 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:24.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.859 --rc genhtml_branch_coverage=1 00:05:24.859 --rc genhtml_function_coverage=1 00:05:24.859 --rc genhtml_legend=1 00:05:24.859 --rc geninfo_all_blocks=1 00:05:24.859 --rc geninfo_unexecuted_blocks=1 00:05:24.859 00:05:24.859 ' 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:24.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.859 --rc genhtml_branch_coverage=1 00:05:24.859 --rc genhtml_function_coverage=1 00:05:24.859 --rc genhtml_legend=1 00:05:24.859 --rc geninfo_all_blocks=1 00:05:24.859 --rc geninfo_unexecuted_blocks=1 00:05:24.859 00:05:24.859 ' 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:24.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.859 --rc genhtml_branch_coverage=1 00:05:24.859 --rc genhtml_function_coverage=1 00:05:24.859 --rc genhtml_legend=1 00:05:24.859 --rc geninfo_all_blocks=1 00:05:24.859 --rc geninfo_unexecuted_blocks=1 00:05:24.859 00:05:24.859 ' 00:05:24.859 04:41:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.859 04:41:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2181839 00:05:24.859 04:41:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:24.859 04:41:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.859 04:41:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2181839 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2181839 ']' 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.859 04:41:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.859 [2024-10-28 04:41:15.402767] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:24.859 [2024-10-28 04:41:15.402848] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181839 ] 00:05:25.117 [2024-10-28 04:41:15.534266] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:25.117 [2024-10-28 04:41:15.571146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.117 [2024-10-28 04:41:15.622490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.117 [2024-10-28 04:41:15.622545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.117 [2024-10-28 04:41:15.622613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.117 [2024-10-28 04:41:15.622617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.051 04:41:16 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.051 04:41:16 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:26.051 04:41:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:26.051 04:41:16 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.051 04:41:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.051 [2024-10-28 04:41:16.403514] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:26.051 [2024-10-28 04:41:16.403540] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:26.051 [2024-10-28 04:41:16.403572] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:26.052 [2024-10-28 04:41:16.403583] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:26.052 [2024-10-28 04:41:16.403594] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:26.052 04:41:16 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:26.052 04:41:16 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 [2024-10-28 04:41:16.500681] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application 
started. 00:05:26.052 04:41:16 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:26.052 04:41:16 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.052 04:41:16 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 ************************************ 00:05:26.052 START TEST scheduler_create_thread 00:05:26.052 ************************************ 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 2 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 3 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 4 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 5 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 6 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 7 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 8 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 9 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 10 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.052 04:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.619 04:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.619 00:05:26.619 real 0m0.592s 00:05:26.619 user 0m0.010s 00:05:26.619 sys 0m0.004s 00:05:26.619 04:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.619 04:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.619 ************************************ 00:05:26.619 END TEST scheduler_create_thread 00:05:26.619 ************************************ 00:05:26.619 04:41:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:26.619 04:41:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2181839 00:05:26.619 04:41:17 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2181839 ']' 00:05:26.619 04:41:17 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2181839 00:05:26.619 04:41:17 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:26.619 04:41:17 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.619 04:41:17 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2181839 00:05:26.619 04:41:17 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:26.619 04:41:17 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:26.619 04:41:17 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2181839' 00:05:26.619 killing process with pid 2181839 00:05:26.619 04:41:17 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2181839 00:05:26.619 04:41:17 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2181839 00:05:27.186 [2024-10-28 04:41:17.602168] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
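Everything in the scheduler_create_thread trace above is driven over the RPC socket: the scheduler test app starts with --wait-for-rpc, is switched to the dynamic scheduler, initialized, and then threads are created, throttled, and deleted through the scheduler_plugin RPCs. A condensed sketch of that sequence with the flags and RPC names taken from the trace; the thread name, core mask, and PYTHONPATH line here are illustrative assumptions.

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    export PYTHONPATH="$SPDK/test/event/scheduler:${PYTHONPATH:-}"   # assumed location of scheduler_plugin

    # Start the scheduler test app paused, with the same flags the trace shows.
    "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    sched_pid=$!
    sleep 2                                          # stand-in for waitforlisten

    "$RPC" framework_set_scheduler dynamic           # pick the dynamic scheduler
    "$RPC" framework_start_init                      # finish subsystem init

    # The thread RPCs live in the test's plugin, hence --plugin scheduler_plugin.
    tid=$("$RPC" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    "$RPC" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    "$RPC" --plugin scheduler_plugin scheduler_thread_delete "$tid"

    kill "$sched_pid"
    wait "$sched_pid" || true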
00:05:27.445 00:05:27.445 real 0m2.578s 00:05:27.445 user 0m5.395s 00:05:27.445 sys 0m0.381s 00:05:27.445 04:41:17 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.445 04:41:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.445 ************************************ 00:05:27.445 END TEST event_scheduler 00:05:27.445 ************************************ 00:05:27.445 04:41:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:27.445 04:41:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:27.445 04:41:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.445 04:41:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.445 04:41:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.445 ************************************ 00:05:27.445 START TEST app_repeat 00:05:27.445 ************************************ 00:05:27.445 04:41:17 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2182216 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2182216' 00:05:27.445 Process app_repeat pid: 2182216 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:27.445 spdk_app_start Round 0 00:05:27.445 04:41:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2182216 /var/tmp/spdk-nbd.sock 00:05:27.445 04:41:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2182216 ']' 00:05:27.445 04:41:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.445 04:41:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.445 04:41:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.445 04:41:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.445 04:41:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.445 [2024-10-28 04:41:17.875961] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
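The app_repeat trace that follows exercises NBD: two 64 MiB, 4096-byte-block Malloc bdevs are created over the app's /var/tmp/spdk-nbd.sock socket, exported as /dev/nbd0 and /dev/nbd1, and then written and verified with dd and cmp against a random 1 MiB file. A minimal sketch of that register-and-verify loop, with RPC names, sizes, and the socket path taken from the trace; it assumes app_repeat is already listening on that socket and the nbd kernel module is loaded.

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=("$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock)    # app_repeat's RPC socket

    # Create two 64 MiB malloc bdevs with a 4096-byte block size (default names
    # Malloc0/Malloc1, as in the trace) and export them over NBD.
    "${RPC[@]}" bdev_malloc_create 64 4096
    "${RPC[@]}" bdev_malloc_create 64 4096
    "${RPC[@]}" nbd_start_disk Malloc0 /dev/nbd0
    "${RPC[@]}" nbd_start_disk Malloc1 /dev/nbd1

    # Write 1 MiB of random data to each device and compare it back,
    # mirroring the dd/cmp verify pass in the trace.
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"
    done

    "${RPC[@]}" nbd_stop_disk /dev/nbd0
    "${RPC[@]}" nbd_stop_disk /dev/nbd1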
00:05:27.445 [2024-10-28 04:41:17.876036] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182216 ] 00:05:27.445 [2024-10-28 04:41:18.008712] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:27.703 [2024-10-28 04:41:18.044467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.703 [2024-10-28 04:41:18.092729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.703 [2024-10-28 04:41:18.092733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.703 04:41:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.703 04:41:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:27.703 04:41:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.962 Malloc0 00:05:27.962 04:41:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.220 Malloc1 00:05:28.220 04:41:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.220 04:41:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.786 /dev/nbd0 00:05:28.786 04:41:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.786 04:41:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 
00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.786 1+0 records in 00:05:28.786 1+0 records out 00:05:28.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144866 s, 28.3 MB/s 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.786 04:41:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.786 04:41:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.786 04:41:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.786 04:41:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.044 /dev/nbd1 00:05:29.044 04:41:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.044 04:41:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.044 1+0 records in 00:05:29.044 1+0 records out 00:05:29.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227628 s, 18.0 MB/s 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:29.044 04:41:19 event.app_repeat -- common/autotest_common.sh@889 -- # 
return 0 00:05:29.044 04:41:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.044 04:41:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.044 04:41:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.044 04:41:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.044 04:41:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.300 { 00:05:29.300 "nbd_device": "/dev/nbd0", 00:05:29.300 "bdev_name": "Malloc0" 00:05:29.300 }, 00:05:29.300 { 00:05:29.300 "nbd_device": "/dev/nbd1", 00:05:29.300 "bdev_name": "Malloc1" 00:05:29.300 } 00:05:29.300 ]' 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.300 { 00:05:29.300 "nbd_device": "/dev/nbd0", 00:05:29.300 "bdev_name": "Malloc0" 00:05:29.300 }, 00:05:29.300 { 00:05:29.300 "nbd_device": "/dev/nbd1", 00:05:29.300 "bdev_name": "Malloc1" 00:05:29.300 } 00:05:29.300 ]' 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.300 /dev/nbd1' 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.300 /dev/nbd1' 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.300 256+0 records in 00:05:29.300 256+0 records out 00:05:29.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385411 s, 272 MB/s 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.300 256+0 records in 00:05:29.300 256+0 records out 00:05:29.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196373 s, 53.4 MB/s 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 
count=256 oflag=direct 00:05:29.300 256+0 records in 00:05:29.300 256+0 records out 00:05:29.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232988 s, 45.0 MB/s 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.300 04:41:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.301 04:41:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.558 04:41:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.558 04:41:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.558 04:41:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.558 04:41:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.558 04:41:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.558 04:41:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.558 04:41:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.558 04:41:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.558 04:41:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.558 04:41:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.124 04:41:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.381 04:41:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.381 04:41:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.382 04:41:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.382 04:41:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.382 04:41:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.382 04:41:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.382 04:41:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.382 04:41:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.382 04:41:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.382 04:41:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.640 04:41:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.898 [2024-10-28 04:41:21.244029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.898 [2024-10-28 04:41:21.292190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.898 [2024-10-28 04:41:21.292196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.898 [2024-10-28 04:41:21.354996] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.898 [2024-10-28 04:41:21.355077] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.181 04:41:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.181 04:41:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:34.181 spdk_app_start Round 1 00:05:34.181 04:41:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2182216 /var/tmp/spdk-nbd.sock 00:05:34.181 04:41:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2182216 ']' 00:05:34.181 04:41:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.181 04:41:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.181 04:41:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:34.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.181 04:41:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.181 04:41:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.181 04:41:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.181 04:41:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:34.181 04:41:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.181 Malloc0 00:05:34.181 04:41:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.440 Malloc1 00:05:34.440 04:41:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.440 04:41:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.698 /dev/nbd0 00:05:34.698 04:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.698 04:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:34.698 04:41:25 event.app_repeat -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.698 1+0 records in 00:05:34.698 1+0 records out 00:05:34.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167463 s, 24.5 MB/s 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:34.698 04:41:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:34.698 04:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.698 04:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.698 04:41:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.956 /dev/nbd1 00:05:34.956 04:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.214 04:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.214 1+0 records in 00:05:35.214 1+0 records out 00:05:35.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189846 s, 21.6 MB/s 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:35.214 04:41:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:35.214 04:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.214 04:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.214 04:41:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.214 04:41:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.214 04:41:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.473 { 00:05:35.473 "nbd_device": "/dev/nbd0", 00:05:35.473 "bdev_name": "Malloc0" 00:05:35.473 }, 00:05:35.473 { 00:05:35.473 "nbd_device": "/dev/nbd1", 00:05:35.473 "bdev_name": "Malloc1" 00:05:35.473 } 00:05:35.473 ]' 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.473 { 00:05:35.473 "nbd_device": "/dev/nbd0", 00:05:35.473 "bdev_name": "Malloc0" 00:05:35.473 }, 00:05:35.473 { 00:05:35.473 "nbd_device": "/dev/nbd1", 00:05:35.473 "bdev_name": "Malloc1" 00:05:35.473 } 00:05:35.473 ]' 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.473 /dev/nbd1' 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.473 /dev/nbd1' 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.473 256+0 records in 00:05:35.473 256+0 records out 00:05:35.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475549 s, 220 MB/s 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.473 256+0 records in 00:05:35.473 256+0 records out 00:05:35.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225939 s, 46.4 MB/s 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.473 256+0 records in 00:05:35.473 256+0 records out 00:05:35.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215195 s, 48.7 MB/s 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
local nbd_list 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.473 04:41:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.731 04:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.731 04:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.731 04:41:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.731 04:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.732 04:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.732 04:41:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.732 04:41:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.732 04:41:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.732 04:41:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.732 04:41:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.990 04:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.990 04:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.990 04:41:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.990 04:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.990 04:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.990 04:41:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.990 04:41:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.990 04:41:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
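Editor's note: the waitfornbd_exit calls traced above poll /proc/partitions until the kernel drops the nbd0/nbd1 entries after nbd_stop_disk, giving up after 20 attempts. A minimal standalone sketch of that pattern follows; the function name and the sleep interval are illustrative, not taken from the harness, and the real helper may structure its loop differently.

wait_for_nbd_exit() {
    # Succeed once /proc/partitions no longer lists the device, i.e. the
    # NBD detach requested via nbd_stop_disk has actually completed.
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1    # assumed back-off between polls
    done
    return 1         # device still present after 20 polls
}

Polling the kernel's partition table rather than the RPC layer catches the case where SPDK reports the disk stopped before the block device has actually been torn down.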
00:05:35.990 04:41:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.990 04:41:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.990 04:41:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.556 04:41:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.556 04:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.556 04:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.556 04:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.556 04:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.556 04:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.556 04:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.556 04:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.556 04:41:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.556 04:41:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.556 04:41:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.556 04:41:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.556 04:41:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.815 04:41:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:36.815 [2024-10-28 04:41:27.403474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.073 [2024-10-28 04:41:27.451888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.073 [2024-10-28 04:41:27.451894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.073 [2024-10-28 04:41:27.515933] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.073 [2024-10-28 04:41:27.516024] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.350 04:41:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.350 04:41:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.350 spdk_app_start Round 2 00:05:40.350 04:41:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2182216 /var/tmp/spdk-nbd.sock 00:05:40.350 04:41:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2182216 ']' 00:05:40.350 04:41:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.350 04:41:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.350 04:41:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
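Editor's note: the nbd_get_count sequence traced above turns the nbd_get_disks JSON into a device count: jq extracts the .nbd_device fields and grep -c counts the /dev/nbd entries, while the bare `true` at nbd_common.sh@65 keeps grep's non-zero exit status (returned whenever the count is 0) from failing the step. A hedged sketch of the same idea; the rpc.py path and socket are the ones used throughout this job, the function name is illustrative.

count_nbd_disks() {
    local rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    local sock=${1:-/var/tmp/spdk-nbd.sock}
    local json count
    json=$("$rpc_py" -s "$sock" nbd_get_disks)
    # grep -c exits 1 when it counts zero matches, so tolerate that explicitly.
    count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd) || true
    echo "$count"
}

After the disks are stopped the RPC returns '[]' and the function prints 0, matching the '[' 0 -ne 0 ']' guard in the trace.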
00:05:40.350 04:41:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.350 04:41:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.350 04:41:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.350 04:41:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:40.350 04:41:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.350 Malloc0 00:05:40.350 04:41:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.606 Malloc1 00:05:40.606 04:41:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.606 04:41:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.863 /dev/nbd0 00:05:40.863 04:41:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.863 04:41:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:40.863 1+0 records in 00:05:40.863 1+0 records out 00:05:40.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146149 s, 28.0 MB/s 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:40.863 04:41:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:40.863 04:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.863 04:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.863 04:41:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.428 /dev/nbd1 00:05:41.428 04:41:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.428 04:41:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.428 1+0 records in 00:05:41.428 1+0 records out 00:05:41.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213256 s, 19.2 MB/s 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:41.428 04:41:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:41.428 04:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.428 04:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.428 04:41:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.428 04:41:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.428 04:41:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.428 04:41:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:41.428 { 00:05:41.428 "nbd_device": "/dev/nbd0", 00:05:41.428 "bdev_name": "Malloc0" 00:05:41.428 }, 00:05:41.428 { 00:05:41.428 "nbd_device": "/dev/nbd1", 00:05:41.428 "bdev_name": "Malloc1" 00:05:41.428 } 00:05:41.428 ]' 00:05:41.428 04:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.428 { 00:05:41.428 "nbd_device": "/dev/nbd0", 00:05:41.428 "bdev_name": "Malloc0" 00:05:41.428 }, 00:05:41.428 { 00:05:41.428 "nbd_device": "/dev/nbd1", 00:05:41.428 "bdev_name": "Malloc1" 00:05:41.428 } 00:05:41.428 ]' 00:05:41.428 04:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.686 /dev/nbd1' 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.686 /dev/nbd1' 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.686 256+0 records in 00:05:41.686 256+0 records out 00:05:41.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515333 s, 203 MB/s 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.686 256+0 records in 00:05:41.686 256+0 records out 00:05:41.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202803 s, 51.7 MB/s 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.686 256+0 records in 00:05:41.686 256+0 records out 00:05:41.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238246 s, 44.0 MB/s 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.686 04:41:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.944 04:41:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.944 04:41:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.944 04:41:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.945 04:41:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.945 04:41:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.945 04:41:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.945 04:41:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.945 04:41:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.945 04:41:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.945 04:41:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.202 04:41:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.202 04:41:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.202 04:41:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.202 04:41:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.202 04:41:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.202 04:41:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.202 04:41:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.202 04:41:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.202 04:41:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.202 04:41:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.202 04:41:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.460 04:41:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.460 04:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.460 04:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.460 04:41:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.460 04:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.460 04:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.460 04:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.460 04:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.460 04:41:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.460 04:41:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.460 04:41:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.460 04:41:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.460 04:41:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.025 04:41:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:43.025 [2024-10-28 04:41:33.514026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.025 [2024-10-28 04:41:33.562167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.025 [2024-10-28 04:41:33.562172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.284 [2024-10-28 04:41:33.625253] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.284 [2024-10-28 04:41:33.625319] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.809 04:41:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2182216 /var/tmp/spdk-nbd.sock 00:05:45.809 04:41:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2182216 ']' 00:05:45.809 04:41:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.809 04:41:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.809 04:41:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
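Editor's note: each round's data check, traced repeatedly above, follows the same write-then-verify shape: fill a 1 MiB temp file from /dev/urandom, copy it onto every attached NBD device with O_DIRECT, then cmp the device contents back against the file. A condensed sketch of that pattern; the paths and device list mirror the ones in this trace, the function name is illustrative.

nbd_write_verify_sketch() {
    local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
    local nbd_list=('/dev/nbd0' '/dev/nbd1') nbd

    # Write phase: 256 x 4 KiB of random data, pushed to each device with O_DIRECT.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # Verify phase: byte-compare the first 1 MiB of each device against the source.
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"
}

cmp exits non-zero on the first mismatching byte, so a corrupted write fails the round immediately.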
00:05:45.809 04:41:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.810 04:41:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.068 04:41:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.068 04:41:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:46.068 04:41:36 event.app_repeat -- event/event.sh@39 -- # killprocess 2182216 00:05:46.068 04:41:36 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2182216 ']' 00:05:46.068 04:41:36 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2182216 00:05:46.068 04:41:36 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:46.068 04:41:36 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.068 04:41:36 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2182216 00:05:46.068 04:41:36 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.068 04:41:36 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.068 04:41:36 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2182216' 00:05:46.068 killing process with pid 2182216 00:05:46.068 04:41:36 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2182216 00:05:46.068 04:41:36 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2182216 00:05:46.326 spdk_app_start is called in Round 0. 00:05:46.326 Shutdown signal received, stop current app iteration 00:05:46.326 Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 reinitialization... 00:05:46.326 spdk_app_start is called in Round 1. 00:05:46.326 Shutdown signal received, stop current app iteration 00:05:46.326 Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 reinitialization... 00:05:46.326 spdk_app_start is called in Round 2. 00:05:46.326 Shutdown signal received, stop current app iteration 00:05:46.326 Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 reinitialization... 00:05:46.326 spdk_app_start is called in Round 3. 
00:05:46.326 Shutdown signal received, stop current app iteration 00:05:46.326 04:41:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:46.326 04:41:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:46.326 00:05:46.326 real 0m18.953s 00:05:46.326 user 0m41.803s 00:05:46.326 sys 0m3.223s 00:05:46.326 04:41:36 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.326 04:41:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.326 ************************************ 00:05:46.326 END TEST app_repeat 00:05:46.326 ************************************ 00:05:46.326 04:41:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:46.326 04:41:36 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.326 04:41:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.326 04:41:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.326 04:41:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.326 ************************************ 00:05:46.326 START TEST cpu_locks 00:05:46.326 ************************************ 00:05:46.326 04:41:36 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.326 * Looking for test storage... 00:05:46.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.326 04:41:36 event.cpu_locks -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:46.326 04:41:36 event.cpu_locks -- common/autotest_common.sh@1689 -- # lcov --version 00:05:46.326 04:41:36 event.cpu_locks -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:46.584 04:41:36 event.cpu_locks -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:46.584 04:41:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.584 04:41:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.584 04:41:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.584 04:41:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.584 04:41:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.584 04:41:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.584 04:41:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.585 04:41:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:46.585 04:41:36 event.cpu_locks -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.585 04:41:36 event.cpu_locks -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:46.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.585 --rc genhtml_branch_coverage=1 00:05:46.585 --rc genhtml_function_coverage=1 00:05:46.585 --rc genhtml_legend=1 00:05:46.585 --rc geninfo_all_blocks=1 00:05:46.585 --rc geninfo_unexecuted_blocks=1 00:05:46.585 00:05:46.585 ' 00:05:46.585 04:41:36 event.cpu_locks -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:46.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.585 --rc genhtml_branch_coverage=1 00:05:46.585 --rc genhtml_function_coverage=1 00:05:46.585 --rc genhtml_legend=1 00:05:46.585 --rc geninfo_all_blocks=1 00:05:46.585 --rc geninfo_unexecuted_blocks=1 00:05:46.585 00:05:46.585 ' 00:05:46.585 04:41:36 event.cpu_locks -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:46.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.585 --rc genhtml_branch_coverage=1 00:05:46.585 --rc genhtml_function_coverage=1 00:05:46.585 --rc genhtml_legend=1 00:05:46.585 --rc geninfo_all_blocks=1 00:05:46.585 --rc geninfo_unexecuted_blocks=1 00:05:46.585 00:05:46.585 ' 00:05:46.585 04:41:36 event.cpu_locks -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:46.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.585 --rc genhtml_branch_coverage=1 00:05:46.585 --rc genhtml_function_coverage=1 00:05:46.585 --rc genhtml_legend=1 00:05:46.585 --rc geninfo_all_blocks=1 00:05:46.585 --rc geninfo_unexecuted_blocks=1 00:05:46.585 00:05:46.585 ' 00:05:46.585 04:41:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.585 04:41:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.585 04:41:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.585 04:41:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.585 04:41:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.585 04:41:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.585 04:41:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.585 ************************************ 
00:05:46.585 START TEST default_locks 00:05:46.585 ************************************ 00:05:46.585 04:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:46.585 04:41:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2184707 00:05:46.585 04:41:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.585 04:41:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2184707 00:05:46.585 04:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2184707 ']' 00:05:46.585 04:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.585 04:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.585 04:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.585 04:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.585 04:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.585 [2024-10-28 04:41:37.070121] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:46.585 [2024-10-28 04:41:37.070215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184707 ] 00:05:46.844 [2024-10-28 04:41:37.201561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
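Editor's note: default_locks starts a dedicated spdk_tgt pinned to core 0 (-m 0x1), records its pid, and blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers, as traced above; the target's own startup notices continue below. waitforlisten's internals are not shown in this trace, so the rpc_get_methods probe in the sketch is only one plausible way to approximate it; the retry count of 100 matches the max_retries value visible in the log.

start_spdk_tgt_sketch() {
    local spdk_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    local rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    local sock=/var/tmp/spdk.sock i

    "$spdk_bin" -m 0x1 &            # run the target on core 0 only
    spdk_tgt_pid=$!                 # kept global so later lock checks can use it

    for ((i = 0; i < 100; i++)); do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || return 1     # target died during startup
        "$rpc_py" -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}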
00:05:46.844 [2024-10-28 04:41:37.236987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.844 [2024-10-28 04:41:37.286464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.779 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.779 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:47.779 04:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2184707 00:05:47.779 04:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2184707 00:05:47.779 04:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.779 lslocks: write error 00:05:47.779 04:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2184707 00:05:47.779 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2184707 ']' 00:05:47.779 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2184707 00:05:47.779 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:47.779 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.779 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2184707 00:05:48.037 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.037 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.037 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2184707' 00:05:48.037 killing process with pid 2184707 00:05:48.037 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2184707 00:05:48.037 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2184707 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2184707 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2184707 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2184707 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2184707 ']' 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
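Editor's note: the locks_exist check traced above confirms that the running target actually holds its CPU-core file lock: lslocks lists the locks held by the pid and grep -q looks for a path containing spdk_cpu_lock. The "lslocks: write error" line is lslocks complaining that the grep -q end of the pipe closed after the first match, not a test failure. A minimal sketch of the same check (function name illustrative):

locks_exist_sketch() {
    # Succeeds when the given pid holds at least one file lock whose path
    # mentions spdk_cpu_lock, i.e. the per-core lock taken for -m 0x1.
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

killprocess then tears the target down the same way app_repeat did: kill -0 to confirm the pid is alive, a ps comm check to make sure it is still an SPDK reactor, then kill and wait.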
00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2184707) - No such process 00:05:48.296 ERROR: process (pid: 2184707) is no longer running 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.296 00:05:48.296 real 0m1.764s 00:05:48.296 user 0m1.863s 00:05:48.296 sys 0m0.593s 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.296 04:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.296 ************************************ 00:05:48.296 END TEST default_locks 00:05:48.296 ************************************ 00:05:48.296 04:41:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:48.296 04:41:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.296 04:41:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.296 04:41:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.296 ************************************ 00:05:48.296 START TEST default_locks_via_rpc 00:05:48.296 ************************************ 00:05:48.296 04:41:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:48.296 04:41:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2184878 00:05:48.296 04:41:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.296 04:41:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2184878 00:05:48.296 04:41:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2184878 ']' 00:05:48.296 04:41:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.296 04:41:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.296 04:41:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
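Editor's note: the second half of default_locks, traced just above, is a negative test: after killprocess, waitforlisten against the dead pid must fail, and the NOT wrapper turns that expected failure into a passing step (the inner command returns es=1, then (( !es == 0 )) succeeds). A stripped-down sketch of that inversion; the real helper also validates the command with valid_exec_arg and treats crash-level exit codes above 128 specially, which is omitted here.

not_sketch() {
    # Run the given command and succeed only if it fails.
    local es=0
    "$@" || es=$?
    ((es != 0))
}

Used as: not_sketch waitforlisten 2184707 — the step passes precisely because that process is gone.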
00:05:48.296 04:41:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.296 04:41:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.296 [2024-10-28 04:41:38.886606] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:48.296 [2024-10-28 04:41:38.886718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184878 ] 00:05:48.554 [2024-10-28 04:41:39.019579] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:48.555 [2024-10-28 04:41:39.056600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.555 [2024-10-28 04:41:39.103467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2184878 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2184878 00:05:49.489 04:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.746 04:41:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2184878 00:05:49.746 04:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2184878 ']' 00:05:49.746 04:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2184878 00:05:49.746 04:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:49.746 04:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.746 04:41:40 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2184878 00:05:49.746 04:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.746 04:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.746 04:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2184878' 00:05:49.746 killing process with pid 2184878 00:05:49.746 04:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2184878 00:05:49.746 04:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2184878 00:05:50.312 00:05:50.312 real 0m1.803s 00:05:50.312 user 0m1.948s 00:05:50.312 sys 0m0.551s 00:05:50.312 04:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.312 04:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.312 ************************************ 00:05:50.312 END TEST default_locks_via_rpc 00:05:50.312 ************************************ 00:05:50.312 04:41:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:50.312 04:41:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.312 04:41:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.312 04:41:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.312 ************************************ 00:05:50.312 START TEST non_locking_app_on_locked_coremask 00:05:50.312 ************************************ 00:05:50.312 04:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:50.312 04:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2185164 00:05:50.312 04:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.312 04:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2185164 /var/tmp/spdk.sock 00:05:50.312 04:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2185164 ']' 00:05:50.312 04:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.312 04:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.312 04:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.312 04:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.312 04:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.312 [2024-10-28 04:41:40.742569] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:05:50.312 [2024-10-28 04:41:40.742684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185164 ] 00:05:50.312 [2024-10-28 04:41:40.874966] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:50.570 [2024-10-28 04:41:40.917937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.570 [2024-10-28 04:41:40.966160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.138 04:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.138 04:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:51.138 04:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2185299 00:05:51.138 04:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:51.138 04:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2185299 /var/tmp/spdk2.sock 00:05:51.138 04:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2185299 ']' 00:05:51.138 04:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.138 04:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.138 04:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.138 04:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.138 04:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.396 [2024-10-28 04:41:41.775113] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:51.396 [2024-10-28 04:41:41.775201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185299 ] 00:05:51.396 [2024-10-28 04:41:41.906462] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:51.396 [2024-10-28 04:41:41.988647] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
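For context, and not part of the captured output: the second target in this test (pid 2185299) is launched with --disable-cpumask-locks and its own RPC socket, which is why app.c reports "CPU core locks deactivated" just above and the process can share core 0 with the already running target 2185164. A hedged sketch of that launch pattern, with the workspace prefix shortened to ./build for readability:

    # first target claims core 0 and creates /var/tmp/spdk_cpu_lock_000
    ./build/bin/spdk_tgt -m 0x1 &
    # second target skips the core-lock claim and answers RPCs on a separate socket
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &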
00:05:51.396 [2024-10-28 04:41:41.988679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.725 [2024-10-28 04:41:42.092280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.337 04:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.337 04:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:52.337 04:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2185164 00:05:52.337 04:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2185164 00:05:52.337 04:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.903 lslocks: write error 00:05:52.903 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2185164 00:05:52.903 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2185164 ']' 00:05:52.903 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2185164 00:05:52.903 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:52.903 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.903 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2185164 00:05:52.903 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.903 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.903 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2185164' 00:05:52.903 killing process with pid 2185164 00:05:52.903 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2185164 00:05:52.903 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2185164 00:05:53.470 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2185299 00:05:53.470 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2185299 ']' 00:05:53.470 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2185299 00:05:53.470 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:53.470 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.470 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2185299 00:05:53.470 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.470 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.470 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2185299' 00:05:53.470 
killing process with pid 2185299 00:05:53.470 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2185299 00:05:53.729 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2185299 00:05:53.987 00:05:53.987 real 0m3.791s 00:05:53.987 user 0m4.094s 00:05:53.987 sys 0m1.113s 00:05:53.987 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.987 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.987 ************************************ 00:05:53.987 END TEST non_locking_app_on_locked_coremask 00:05:53.987 ************************************ 00:05:53.987 04:41:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:53.987 04:41:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.987 04:41:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.987 04:41:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.987 ************************************ 00:05:53.987 START TEST locking_app_on_unlocked_coremask 00:05:53.987 ************************************ 00:05:53.987 04:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:53.987 04:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2185602 00:05:53.988 04:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:53.988 04:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2185602 /var/tmp/spdk.sock 00:05:53.988 04:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2185602 ']' 00:05:53.988 04:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.988 04:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.988 04:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.988 04:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.988 04:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.246 [2024-10-28 04:41:44.587475] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:54.246 [2024-10-28 04:41:44.587551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185602 ] 00:05:54.246 [2024-10-28 04:41:44.721413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:54.246 [2024-10-28 04:41:44.763747] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:54.246 [2024-10-28 04:41:44.763774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.246 [2024-10-28 04:41:44.813768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.180 04:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.180 04:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:55.180 04:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2185735 00:05:55.180 04:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.180 04:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2185735 /var/tmp/spdk2.sock 00:05:55.180 04:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2185735 ']' 00:05:55.180 04:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.180 04:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.180 04:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.180 04:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.180 04:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.180 [2024-10-28 04:41:45.626002] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:55.180 [2024-10-28 04:41:45.626089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185735 ] 00:05:55.180 [2024-10-28 04:41:45.756551] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:55.438 [2024-10-28 04:41:45.839470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.438 [2024-10-28 04:41:45.942686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.372 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.372 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:56.372 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2185735 00:05:56.372 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2185735 00:05:56.372 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.630 lslocks: write error 00:05:56.630 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2185602 00:05:56.630 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2185602 ']' 00:05:56.630 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2185602 00:05:56.630 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:56.630 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.630 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2185602 00:05:56.630 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.630 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.630 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2185602' 00:05:56.630 killing process with pid 2185602 00:05:56.631 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2185602 00:05:56.631 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2185602 00:05:57.566 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2185735 00:05:57.566 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2185735 ']' 00:05:57.566 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2185735 00:05:57.566 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:57.566 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.566 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2185735 00:05:57.566 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.566 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.566 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2185735' 00:05:57.566 killing process with pid 2185735 00:05:57.566 
04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2185735 00:05:57.566 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2185735 00:05:57.825 00:05:57.825 real 0m3.864s 00:05:57.825 user 0m4.169s 00:05:57.825 sys 0m1.134s 00:05:57.825 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.825 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.825 ************************************ 00:05:57.825 END TEST locking_app_on_unlocked_coremask 00:05:57.825 ************************************ 00:05:58.084 04:41:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:58.084 04:41:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.084 04:41:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.084 04:41:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.084 ************************************ 00:05:58.084 START TEST locking_app_on_locked_coremask 00:05:58.084 ************************************ 00:05:58.084 04:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:58.084 04:41:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2186150 00:05:58.084 04:41:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.084 04:41:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2186150 /var/tmp/spdk.sock 00:05:58.084 04:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2186150 ']' 00:05:58.084 04:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.084 04:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.084 04:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.084 04:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.084 04:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.084 [2024-10-28 04:41:48.502028] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:58.084 [2024-10-28 04:41:48.502124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186150 ] 00:05:58.084 [2024-10-28 04:41:48.635357] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:58.084 [2024-10-28 04:41:48.671262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.342 [2024-10-28 04:41:48.720844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2186282 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2186282 /var/tmp/spdk2.sock 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2186282 /var/tmp/spdk2.sock 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2186282 /var/tmp/spdk2.sock 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2186282 ']' 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.909 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.166 [2024-10-28 04:41:49.542438] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:05:59.166 [2024-10-28 04:41:49.542521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186282 ] 00:05:59.166 [2024-10-28 04:41:49.683333] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
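For context, and not part of the captured output: locking_app_on_locked_coremask runs the opposite check; the second spdk_tgt just started here uses the same -m 0x1 mask without --disable-cpumask-locks, so it is expected to fail. The claim_cpu_cores error and the "No such process" / "is no longer running" lines that follow are the assertion succeeding (the real test wraps this in NOT waitforlisten), roughly equivalent to this simplified sketch:

    # with pid 2186150 still holding core 0, a plain second launch must exit non-zero
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    echo $?   # non-zero, after "Cannot create lock on core 0 ..." is printed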
00:05:59.423 [2024-10-28 04:41:49.764485] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2186150 has claimed it. 00:05:59.423 [2024-10-28 04:41:49.764535] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2186282) - No such process 00:05:59.680 ERROR: process (pid: 2186282) is no longer running 00:05:59.680 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.680 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:59.680 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:59.680 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.680 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.680 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.680 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2186150 00:05:59.680 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2186150 00:05:59.680 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.938 lslocks: write error 00:05:59.938 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2186150 00:05:59.938 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2186150 ']' 00:05:59.938 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2186150 00:05:59.938 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:59.938 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.938 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2186150 00:06:00.196 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.196 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.196 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2186150' 00:06:00.196 killing process with pid 2186150 00:06:00.196 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2186150 00:06:00.196 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2186150 00:06:00.455 00:06:00.455 real 0m2.509s 00:06:00.455 user 0m2.822s 00:06:00.455 sys 0m0.672s 00:06:00.455 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.455 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.455 ************************************ 00:06:00.455 END TEST locking_app_on_locked_coremask 00:06:00.455 ************************************ 00:06:00.455 04:41:50 
event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:00.455 04:41:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.455 04:41:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.455 04:41:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.455 ************************************ 00:06:00.455 START TEST locking_overlapped_coremask 00:06:00.455 ************************************ 00:06:00.455 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:00.455 04:41:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2186450 00:06:00.455 04:41:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:00.455 04:41:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2186450 /var/tmp/spdk.sock 00:06:00.455 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2186450 ']' 00:06:00.455 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.455 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.455 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.455 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.455 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.714 [2024-10-28 04:41:51.061092] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:06:00.714 [2024-10-28 04:41:51.061183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186450 ] 00:06:00.714 [2024-10-28 04:41:51.193765] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:00.714 [2024-10-28 04:41:51.233782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.714 [2024-10-28 04:41:51.287776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.714 [2024-10-28 04:41:51.287831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.714 [2024-10-28 04:41:51.287850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2186585 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2186585 /var/tmp/spdk2.sock 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2186585 /var/tmp/spdk2.sock 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2186585 /var/tmp/spdk2.sock 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2186585 ']' 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.648 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.648 [2024-10-28 04:41:52.082894] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:06:01.648 [2024-10-28 04:41:52.082988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186585 ] 00:06:01.648 [2024-10-28 04:41:52.215670] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:01.906 [2024-10-28 04:41:52.288287] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2186450 has claimed it. 00:06:01.906 [2024-10-28 04:41:52.288344] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:02.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2186585) - No such process 00:06:02.471 ERROR: process (pid: 2186585) is no longer running 00:06:02.471 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2186450 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2186450 ']' 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2186450 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2186450 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2186450' 00:06:02.472 killing process with pid 2186450 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2186450 00:06:02.472 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2186450 00:06:02.731 00:06:02.731 real 0m2.228s 00:06:02.731 user 0m6.251s 00:06:02.731 sys 0m0.493s 00:06:02.731 04:41:53 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.731 04:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.731 ************************************ 00:06:02.731 END TEST locking_overlapped_coremask 00:06:02.731 ************************************ 00:06:02.731 04:41:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:02.731 04:41:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.731 04:41:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.731 04:41:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.731 ************************************ 00:06:02.731 START TEST locking_overlapped_coremask_via_rpc 00:06:02.731 ************************************ 00:06:02.731 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:02.731 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2186747 00:06:02.731 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:02.731 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2186747 /var/tmp/spdk.sock 00:06:02.731 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2186747 ']' 00:06:02.731 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.731 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.731 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.731 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.731 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.990 [2024-10-28 04:41:53.338986] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:06:02.990 [2024-10-28 04:41:53.339078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186747 ] 00:06:02.990 [2024-10-28 04:41:53.471257] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:02.990 [2024-10-28 04:41:53.516082] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
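For context, and not part of the captured output: the check_remaining_locks helper used in the locking_overlapped_coremask test above expands /var/tmp/spdk_cpu_lock_* and compares it against the files a -m 0x7 target should hold, one per claimed core (0 through 2). The same state can be inspected by hand with, for example:

    ls /var/tmp/spdk_cpu_lock_*     # expect _000, _001 and _002 for a 0x7 mask
    lslocks | grep spdk_cpu_lock    # the reactor process should still own locks on all three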
00:06:02.990 [2024-10-28 04:41:53.516122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.990 [2024-10-28 04:41:53.569741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.990 [2024-10-28 04:41:53.569796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.990 [2024-10-28 04:41:53.569799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.923 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.923 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:03.923 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2186882 00:06:03.923 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:03.923 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2186882 /var/tmp/spdk2.sock 00:06:03.923 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2186882 ']' 00:06:03.923 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.923 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.923 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.923 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.923 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.923 [2024-10-28 04:41:54.370978] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:06:03.923 [2024-10-28 04:41:54.371058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186882 ] 00:06:03.923 [2024-10-28 04:41:54.506421] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:04.181 [2024-10-28 04:41:54.578755] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:04.181 [2024-10-28 04:41:54.578783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.181 [2024-10-28 04:41:54.682594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.181 [2024-10-28 04:41:54.682666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:04.181 [2024-10-28 04:41:54.682669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.114 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 [2024-10-28 04:41:55.364737] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2186747 has claimed it. 
00:06:05.115 request: 00:06:05.115 { 00:06:05.115 "method": "framework_enable_cpumask_locks", 00:06:05.115 "req_id": 1 00:06:05.115 } 00:06:05.115 Got JSON-RPC error response 00:06:05.115 response: 00:06:05.115 { 00:06:05.115 "code": -32603, 00:06:05.115 "message": "Failed to claim CPU core: 2" 00:06:05.115 } 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2186747 /var/tmp/spdk.sock 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2186747 ']' 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2186882 /var/tmp/spdk2.sock 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2186882 ']' 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
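For context, and not part of the captured output: the JSON-RPC exchange above is the via_rpc variant of the same conflict. Both targets were started with --disable-cpumask-locks; the first (mask 0x7, pid 2186747) re-enabled its locks over RPC and now owns cores 0 through 2, so asking the second target (mask 0x1c, which shares core 2) to take its locks fails with error -32603. The failing request can be issued by hand with the bundled RPC client against the second target's socket:

    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # returns "Failed to claim CPU core: 2" while the first target is still running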
00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.115 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.373 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.373 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:05.373 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:05.373 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.373 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.373 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.373 00:06:05.373 real 0m2.647s 00:06:05.373 user 0m1.356s 00:06:05.373 sys 0m0.215s 00:06:05.373 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.373 04:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.373 ************************************ 00:06:05.373 END TEST locking_overlapped_coremask_via_rpc 00:06:05.373 ************************************ 00:06:05.373 04:41:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:05.373 04:41:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2186747 ]] 00:06:05.373 04:41:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2186747 00:06:05.373 04:41:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2186747 ']' 00:06:05.373 04:41:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2186747 00:06:05.373 04:41:55 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:05.373 04:41:55 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.373 04:41:55 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2186747 00:06:05.631 04:41:55 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.631 04:41:55 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.631 04:41:55 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2186747' 00:06:05.631 killing process with pid 2186747 00:06:05.631 04:41:55 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2186747 00:06:05.631 04:41:55 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2186747 00:06:05.889 04:41:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2186882 ]] 00:06:05.889 04:41:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2186882 00:06:05.889 04:41:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2186882 ']' 00:06:05.889 04:41:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2186882 00:06:05.889 04:41:56 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:05.889 04:41:56 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:05.889 04:41:56 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2186882 00:06:05.889 04:41:56 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:05.889 04:41:56 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:05.889 04:41:56 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2186882' 00:06:05.889 killing process with pid 2186882 00:06:05.889 04:41:56 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2186882 00:06:05.889 04:41:56 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2186882 00:06:06.455 04:41:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.455 04:41:56 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:06.455 04:41:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2186747 ]] 00:06:06.455 04:41:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2186747 00:06:06.455 04:41:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2186747 ']' 00:06:06.455 04:41:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2186747 00:06:06.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2186747) - No such process 00:06:06.455 04:41:56 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2186747 is not found' 00:06:06.455 Process with pid 2186747 is not found 00:06:06.455 04:41:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2186882 ]] 00:06:06.455 04:41:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2186882 00:06:06.455 04:41:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2186882 ']' 00:06:06.455 04:41:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2186882 00:06:06.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2186882) - No such process 00:06:06.455 04:41:56 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2186882 is not found' 00:06:06.455 Process with pid 2186882 is not found 00:06:06.455 04:41:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.455 00:06:06.455 real 0m19.980s 00:06:06.455 user 0m35.327s 00:06:06.455 sys 0m5.727s 00:06:06.455 04:41:56 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.455 04:41:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.455 ************************************ 00:06:06.455 END TEST cpu_locks 00:06:06.455 ************************************ 00:06:06.455 00:06:06.455 real 0m45.824s 00:06:06.455 user 1m29.076s 00:06:06.455 sys 0m9.795s 00:06:06.455 04:41:56 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.455 04:41:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.455 ************************************ 00:06:06.455 END TEST event 00:06:06.455 ************************************ 00:06:06.456 04:41:56 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:06.456 04:41:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.456 04:41:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.456 04:41:56 -- common/autotest_common.sh@10 -- # set +x 00:06:06.456 ************************************ 00:06:06.456 START TEST thread 00:06:06.456 ************************************ 00:06:06.456 04:41:56 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:06.456 * Looking for test storage... 00:06:06.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:06.456 04:41:56 thread -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:06.456 04:41:56 thread -- common/autotest_common.sh@1689 -- # lcov --version 00:06:06.456 04:41:56 thread -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:06.456 04:41:57 thread -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:06.456 04:41:57 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.456 04:41:57 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.456 04:41:57 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.456 04:41:57 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.456 04:41:57 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.456 04:41:57 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.456 04:41:57 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.456 04:41:57 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.456 04:41:57 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.456 04:41:57 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.456 04:41:57 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.456 04:41:57 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:06.456 04:41:57 thread -- scripts/common.sh@345 -- # : 1 00:06:06.456 04:41:57 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.456 04:41:57 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.456 04:41:57 thread -- scripts/common.sh@365 -- # decimal 1 00:06:06.456 04:41:57 thread -- scripts/common.sh@353 -- # local d=1 00:06:06.456 04:41:57 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.456 04:41:57 thread -- scripts/common.sh@355 -- # echo 1 00:06:06.456 04:41:57 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.456 04:41:57 thread -- scripts/common.sh@366 -- # decimal 2 00:06:06.456 04:41:57 thread -- scripts/common.sh@353 -- # local d=2 00:06:06.456 04:41:57 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.456 04:41:57 thread -- scripts/common.sh@355 -- # echo 2 00:06:06.456 04:41:57 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.456 04:41:57 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.456 04:41:57 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.456 04:41:57 thread -- scripts/common.sh@368 -- # return 0 00:06:06.456 04:41:57 thread -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.456 04:41:57 thread -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:06.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.456 --rc genhtml_branch_coverage=1 00:06:06.456 --rc genhtml_function_coverage=1 00:06:06.456 --rc genhtml_legend=1 00:06:06.456 --rc geninfo_all_blocks=1 00:06:06.456 --rc geninfo_unexecuted_blocks=1 00:06:06.456 00:06:06.456 ' 00:06:06.456 04:41:57 thread -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:06.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.456 --rc genhtml_branch_coverage=1 00:06:06.456 --rc genhtml_function_coverage=1 00:06:06.456 --rc genhtml_legend=1 00:06:06.456 --rc geninfo_all_blocks=1 00:06:06.456 --rc geninfo_unexecuted_blocks=1 00:06:06.456 
00:06:06.456 ' 00:06:06.456 04:41:57 thread -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:06.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.456 --rc genhtml_branch_coverage=1 00:06:06.456 --rc genhtml_function_coverage=1 00:06:06.456 --rc genhtml_legend=1 00:06:06.456 --rc geninfo_all_blocks=1 00:06:06.456 --rc geninfo_unexecuted_blocks=1 00:06:06.456 00:06:06.456 ' 00:06:06.456 04:41:57 thread -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:06.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.456 --rc genhtml_branch_coverage=1 00:06:06.456 --rc genhtml_function_coverage=1 00:06:06.456 --rc genhtml_legend=1 00:06:06.456 --rc geninfo_all_blocks=1 00:06:06.456 --rc geninfo_unexecuted_blocks=1 00:06:06.456 00:06:06.456 ' 00:06:06.456 04:41:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.456 04:41:57 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:06.456 04:41:57 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.456 04:41:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.714 ************************************ 00:06:06.714 START TEST thread_poller_perf 00:06:06.714 ************************************ 00:06:06.714 04:41:57 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.714 [2024-10-28 04:41:57.078878] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:06:06.714 [2024-10-28 04:41:57.078955] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187372 ] 00:06:06.714 [2024-10-28 04:41:57.210975] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.714 [2024-10-28 04:41:57.251041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.714 [2024-10-28 04:41:57.301438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.714 Running 1000 pollers for 1 seconds with 1 microseconds period. 
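Both poller_perf runs in this section use the same binary with three flags; matching the command lines against the "Running 1000 pollers for 1 seconds with N microseconds period" banners suggests -b is the poller count, -l the poller period in microseconds and -t the run time in seconds. A standalone sketch of the first run, outside the test harness:

    # Sketch of the first poller_perf invocation recorded in this log:
    # 1000 pollers (-b), 1 microsecond period (-l), 1 second run (-t).
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK_ROOT/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    # The second run below passes -l 0, i.e. zero-period (busy) pollers.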
00:06:08.088 [2024-10-28T03:41:58.684Z] ====================================== 00:06:08.088 [2024-10-28T03:41:58.684Z] busy:2702691448 (cyc) 00:06:08.088 [2024-10-28T03:41:58.684Z] total_run_count: 291000 00:06:08.088 [2024-10-28T03:41:58.684Z] tsc_hz: 2693500000 (cyc) 00:06:08.088 [2024-10-28T03:41:58.684Z] ====================================== 00:06:08.088 [2024-10-28T03:41:58.684Z] poller_cost: 9287 (cyc), 3447 (nsec) 00:06:08.088 00:06:08.088 real 0m1.289s 00:06:08.088 user 0m1.113s 00:06:08.088 sys 0m0.071s 00:06:08.088 04:41:58 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.088 04:41:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.088 ************************************ 00:06:08.088 END TEST thread_poller_perf 00:06:08.088 ************************************ 00:06:08.088 04:41:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.088 04:41:58 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:08.088 04:41:58 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.088 04:41:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.088 ************************************ 00:06:08.088 START TEST thread_poller_perf 00:06:08.088 ************************************ 00:06:08.088 04:41:58 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.088 [2024-10-28 04:41:58.412688] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:06:08.088 [2024-10-28 04:41:58.412748] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187521 ] 00:06:08.088 [2024-10-28 04:41:58.543483] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:08.088 [2024-10-28 04:41:58.583544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.088 [2024-10-28 04:41:58.634075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.088 Running 1000 pollers for 1 seconds with 0 microseconds period. 
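The poller_cost figure in the summary above is the measured busy cycle count divided by the number of poller executions, converted to nanoseconds via the reported TSC frequency; the exact rounding inside poller_perf may differ, but the first run's numbers reproduce as follows:

    # Recompute poller_cost from the values reported for the first run.
    awk 'BEGIN {
        busy   = 2702691448      # busy (cyc)
        runs   = 291000          # total_run_count
        tsc_hz = 2693500000      # tsc_hz (cyc)
        cyc  = int(busy / runs)          # 9287 cycles per poller call
        nsec = int(cyc * 1e9 / tsc_hz)   # 3447 ns per poller call
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
    }'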
00:06:09.461 [2024-10-28T03:42:00.057Z] ====================================== 00:06:09.461 [2024-10-28T03:42:00.057Z] busy:2695959520 (cyc) 00:06:09.461 [2024-10-28T03:42:00.057Z] total_run_count: 3844000 00:06:09.461 [2024-10-28T03:42:00.057Z] tsc_hz: 2693500000 (cyc) 00:06:09.461 [2024-10-28T03:42:00.057Z] ====================================== 00:06:09.461 [2024-10-28T03:42:00.057Z] poller_cost: 701 (cyc), 260 (nsec) 00:06:09.461 00:06:09.461 real 0m1.279s 00:06:09.461 user 0m1.098s 00:06:09.461 sys 0m0.076s 00:06:09.461 04:41:59 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.461 04:41:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.461 ************************************ 00:06:09.461 END TEST thread_poller_perf 00:06:09.461 ************************************ 00:06:09.461 04:41:59 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:09.461 00:06:09.461 real 0m2.807s 00:06:09.461 user 0m2.333s 00:06:09.461 sys 0m0.276s 00:06:09.461 04:41:59 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.461 04:41:59 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.461 ************************************ 00:06:09.461 END TEST thread 00:06:09.461 ************************************ 00:06:09.461 04:41:59 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:09.461 04:41:59 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:09.461 04:41:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.461 04:41:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.461 04:41:59 -- common/autotest_common.sh@10 -- # set +x 00:06:09.461 ************************************ 00:06:09.461 START TEST app_cmdline 00:06:09.461 ************************************ 00:06:09.461 04:41:59 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:09.462 * Looking for test storage... 
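Each START TEST / END TEST banner pair above, together with the real/user/sys timings, appears to come from the harness's run_test wrapper, whose implementation lives in autotest_common.sh and is not shown in this log. A simplified sketch of the same pattern (banner, timed execution, closing banner), not the actual implementation:

    # Simplified run_test-style wrapper: banner, time the test, banner.
    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    # e.g., mirroring a call visible earlier in this log:
    # run_test_sketch thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh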
00:06:09.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@1689 -- # lcov --version 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.462 04:41:59 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:09.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.462 --rc genhtml_branch_coverage=1 00:06:09.462 --rc genhtml_function_coverage=1 00:06:09.462 --rc genhtml_legend=1 00:06:09.462 --rc geninfo_all_blocks=1 00:06:09.462 --rc geninfo_unexecuted_blocks=1 00:06:09.462 00:06:09.462 ' 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:09.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.462 --rc genhtml_branch_coverage=1 00:06:09.462 --rc genhtml_function_coverage=1 00:06:09.462 --rc genhtml_legend=1 00:06:09.462 --rc geninfo_all_blocks=1 00:06:09.462 --rc geninfo_unexecuted_blocks=1 
00:06:09.462 00:06:09.462 ' 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:09.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.462 --rc genhtml_branch_coverage=1 00:06:09.462 --rc genhtml_function_coverage=1 00:06:09.462 --rc genhtml_legend=1 00:06:09.462 --rc geninfo_all_blocks=1 00:06:09.462 --rc geninfo_unexecuted_blocks=1 00:06:09.462 00:06:09.462 ' 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:09.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.462 --rc genhtml_branch_coverage=1 00:06:09.462 --rc genhtml_function_coverage=1 00:06:09.462 --rc genhtml_legend=1 00:06:09.462 --rc geninfo_all_blocks=1 00:06:09.462 --rc geninfo_unexecuted_blocks=1 00:06:09.462 00:06:09.462 ' 00:06:09.462 04:41:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:09.462 04:41:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2187726 00:06:09.462 04:41:59 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:09.462 04:41:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2187726 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2187726 ']' 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.462 04:41:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.462 [2024-10-28 04:41:59.941075] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:06:09.462 [2024-10-28 04:41:59.941173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187726 ] 00:06:09.721 [2024-10-28 04:42:00.073506] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
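The cmdline test above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs are callable on /var/tmp/spdk.sock. Once the target is listening, the version query whose JSON output appears just below can be reproduced directly; a sketch using the same workspace paths as this job:

    # Sketch: start an RPC-restricted target and query it, mirroring the
    # commands recorded in this log.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK_ROOT/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # ... wait for the target to listen on /var/tmp/spdk.sock ...
    $SPDK_ROOT/scripts/rpc.py spdk_get_version
    $SPDK_ROOT/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
    # Any method outside the allow-list (e.g. env_dpdk_get_mem_stats) is
    # rejected, as the -32601 error further below shows.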
00:06:09.721 [2024-10-28 04:42:00.109821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.721 [2024-10-28 04:42:00.158706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.671 04:42:00 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.671 04:42:00 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:10.671 04:42:00 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:10.671 { 00:06:10.671 "version": "SPDK v25.01-pre git sha1 169c3cd04", 00:06:10.671 "fields": { 00:06:10.671 "major": 25, 00:06:10.671 "minor": 1, 00:06:10.671 "patch": 0, 00:06:10.671 "suffix": "-pre", 00:06:10.671 "commit": "169c3cd04" 00:06:10.671 } 00:06:10.671 } 00:06:10.671 04:42:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:10.671 04:42:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:10.671 04:42:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:10.671 04:42:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:10.671 04:42:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:10.671 04:42:01 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.671 04:42:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.671 04:42:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:10.671 04:42:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:10.671 04:42:01 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.929 04:42:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:10.929 04:42:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:10.929 04:42:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.929 04:42:01 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:10.929 04:42:01 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.929 04:42:01 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.929 04:42:01 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.929 04:42:01 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.929 04:42:01 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.929 04:42:01 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.929 04:42:01 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.929 04:42:01 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.929 04:42:01 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:10.929 04:42:01 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.187 request: 00:06:11.187 { 00:06:11.187 "method": 
"env_dpdk_get_mem_stats", 00:06:11.187 "req_id": 1 00:06:11.187 } 00:06:11.187 Got JSON-RPC error response 00:06:11.187 response: 00:06:11.187 { 00:06:11.187 "code": -32601, 00:06:11.187 "message": "Method not found" 00:06:11.187 } 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.187 04:42:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2187726 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2187726 ']' 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2187726 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2187726 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2187726' 00:06:11.187 killing process with pid 2187726 00:06:11.187 04:42:01 app_cmdline -- common/autotest_common.sh@969 -- # kill 2187726 00:06:11.188 04:42:01 app_cmdline -- common/autotest_common.sh@974 -- # wait 2187726 00:06:11.446 00:06:11.446 real 0m2.223s 00:06:11.446 user 0m2.769s 00:06:11.446 sys 0m0.517s 00:06:11.446 04:42:01 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.446 04:42:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.446 ************************************ 00:06:11.446 END TEST app_cmdline 00:06:11.446 ************************************ 00:06:11.446 04:42:01 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:11.446 04:42:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.446 04:42:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.446 04:42:01 -- common/autotest_common.sh@10 -- # set +x 00:06:11.446 ************************************ 00:06:11.446 START TEST version 00:06:11.446 ************************************ 00:06:11.446 04:42:02 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:11.705 * Looking for test storage... 
00:06:11.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:11.705 04:42:02 version -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:11.705 04:42:02 version -- common/autotest_common.sh@1689 -- # lcov --version 00:06:11.705 04:42:02 version -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:11.705 04:42:02 version -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:11.705 04:42:02 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.705 04:42:02 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.705 04:42:02 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.705 04:42:02 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.705 04:42:02 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.705 04:42:02 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.705 04:42:02 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.705 04:42:02 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.705 04:42:02 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.705 04:42:02 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.705 04:42:02 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.705 04:42:02 version -- scripts/common.sh@344 -- # case "$op" in 00:06:11.705 04:42:02 version -- scripts/common.sh@345 -- # : 1 00:06:11.705 04:42:02 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.705 04:42:02 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.705 04:42:02 version -- scripts/common.sh@365 -- # decimal 1 00:06:11.705 04:42:02 version -- scripts/common.sh@353 -- # local d=1 00:06:11.705 04:42:02 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.705 04:42:02 version -- scripts/common.sh@355 -- # echo 1 00:06:11.705 04:42:02 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.705 04:42:02 version -- scripts/common.sh@366 -- # decimal 2 00:06:11.705 04:42:02 version -- scripts/common.sh@353 -- # local d=2 00:06:11.705 04:42:02 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.705 04:42:02 version -- scripts/common.sh@355 -- # echo 2 00:06:11.705 04:42:02 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.705 04:42:02 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.705 04:42:02 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.705 04:42:02 version -- scripts/common.sh@368 -- # return 0 00:06:11.705 04:42:02 version -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.705 04:42:02 version -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:11.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.705 --rc genhtml_branch_coverage=1 00:06:11.705 --rc genhtml_function_coverage=1 00:06:11.705 --rc genhtml_legend=1 00:06:11.705 --rc geninfo_all_blocks=1 00:06:11.705 --rc geninfo_unexecuted_blocks=1 00:06:11.705 00:06:11.705 ' 00:06:11.705 04:42:02 version -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:11.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.705 --rc genhtml_branch_coverage=1 00:06:11.705 --rc genhtml_function_coverage=1 00:06:11.705 --rc genhtml_legend=1 00:06:11.705 --rc geninfo_all_blocks=1 00:06:11.705 --rc geninfo_unexecuted_blocks=1 00:06:11.705 00:06:11.705 ' 00:06:11.705 04:42:02 version -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:11.705 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.705 --rc genhtml_branch_coverage=1 00:06:11.705 --rc genhtml_function_coverage=1 00:06:11.705 --rc genhtml_legend=1 00:06:11.705 --rc geninfo_all_blocks=1 00:06:11.705 --rc geninfo_unexecuted_blocks=1 00:06:11.705 00:06:11.705 ' 00:06:11.705 04:42:02 version -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:11.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.705 --rc genhtml_branch_coverage=1 00:06:11.705 --rc genhtml_function_coverage=1 00:06:11.705 --rc genhtml_legend=1 00:06:11.705 --rc geninfo_all_blocks=1 00:06:11.705 --rc geninfo_unexecuted_blocks=1 00:06:11.705 00:06:11.705 ' 00:06:11.705 04:42:02 version -- app/version.sh@17 -- # get_header_version major 00:06:11.705 04:42:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:11.705 04:42:02 version -- app/version.sh@14 -- # cut -f2 00:06:11.705 04:42:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.705 04:42:02 version -- app/version.sh@17 -- # major=25 00:06:11.705 04:42:02 version -- app/version.sh@18 -- # get_header_version minor 00:06:11.705 04:42:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:11.705 04:42:02 version -- app/version.sh@14 -- # cut -f2 00:06:11.705 04:42:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.705 04:42:02 version -- app/version.sh@18 -- # minor=1 00:06:11.705 04:42:02 version -- app/version.sh@19 -- # get_header_version patch 00:06:11.705 04:42:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:11.705 04:42:02 version -- app/version.sh@14 -- # cut -f2 00:06:11.705 04:42:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.705 04:42:02 version -- app/version.sh@19 -- # patch=0 00:06:11.705 04:42:02 version -- app/version.sh@20 -- # get_header_version suffix 00:06:11.705 04:42:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:11.705 04:42:02 version -- app/version.sh@14 -- # cut -f2 00:06:11.705 04:42:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.705 04:42:02 version -- app/version.sh@20 -- # suffix=-pre 00:06:11.705 04:42:02 version -- app/version.sh@22 -- # version=25.1 00:06:11.705 04:42:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:11.705 04:42:02 version -- app/version.sh@28 -- # version=25.1rc0 00:06:11.705 04:42:02 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:11.705 04:42:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:11.705 04:42:02 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:11.705 04:42:02 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:11.705 00:06:11.705 real 0m0.202s 00:06:11.705 user 0m0.133s 00:06:11.705 sys 0m0.095s 00:06:11.705 04:42:02 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.705 
04:42:02 version -- common/autotest_common.sh@10 -- # set +x 00:06:11.705 ************************************ 00:06:11.705 END TEST version 00:06:11.705 ************************************ 00:06:11.705 04:42:02 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:11.705 04:42:02 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:11.705 04:42:02 -- spdk/autotest.sh@194 -- # uname -s 00:06:11.705 04:42:02 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:11.705 04:42:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:11.705 04:42:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:11.705 04:42:02 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:11.705 04:42:02 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:11.705 04:42:02 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:11.705 04:42:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:11.705 04:42:02 -- common/autotest_common.sh@10 -- # set +x 00:06:11.705 04:42:02 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:11.705 04:42:02 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:11.705 04:42:02 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:11.705 04:42:02 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:11.705 04:42:02 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:11.705 04:42:02 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:11.705 04:42:02 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:11.705 04:42:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:11.705 04:42:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.705 04:42:02 -- common/autotest_common.sh@10 -- # set +x 00:06:11.964 ************************************ 00:06:11.964 START TEST nvmf_tcp 00:06:11.964 ************************************ 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:11.964 * Looking for test storage... 
00:06:11.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.964 04:42:02 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:11.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.964 --rc genhtml_branch_coverage=1 00:06:11.964 --rc genhtml_function_coverage=1 00:06:11.964 --rc genhtml_legend=1 00:06:11.964 --rc geninfo_all_blocks=1 00:06:11.964 --rc geninfo_unexecuted_blocks=1 00:06:11.964 00:06:11.964 ' 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:11.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.964 --rc genhtml_branch_coverage=1 00:06:11.964 --rc genhtml_function_coverage=1 00:06:11.964 --rc genhtml_legend=1 00:06:11.964 --rc geninfo_all_blocks=1 00:06:11.964 --rc geninfo_unexecuted_blocks=1 00:06:11.964 00:06:11.964 ' 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@1703 -- # export 
'LCOV=lcov 00:06:11.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.964 --rc genhtml_branch_coverage=1 00:06:11.964 --rc genhtml_function_coverage=1 00:06:11.964 --rc genhtml_legend=1 00:06:11.964 --rc geninfo_all_blocks=1 00:06:11.964 --rc geninfo_unexecuted_blocks=1 00:06:11.964 00:06:11.964 ' 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:11.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.964 --rc genhtml_branch_coverage=1 00:06:11.964 --rc genhtml_function_coverage=1 00:06:11.964 --rc genhtml_legend=1 00:06:11.964 --rc geninfo_all_blocks=1 00:06:11.964 --rc geninfo_unexecuted_blocks=1 00:06:11.964 00:06:11.964 ' 00:06:11.964 04:42:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:11.964 04:42:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:11.964 04:42:02 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.964 04:42:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.964 ************************************ 00:06:11.964 START TEST nvmf_target_core 00:06:11.964 ************************************ 00:06:11.964 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:11.964 * Looking for test storage... 00:06:11.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:11.964 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:11.964 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1689 -- # lcov --version 00:06:11.964 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.224 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:12.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.224 --rc genhtml_branch_coverage=1 00:06:12.224 --rc genhtml_function_coverage=1 00:06:12.225 --rc genhtml_legend=1 00:06:12.225 --rc geninfo_all_blocks=1 00:06:12.225 --rc geninfo_unexecuted_blocks=1 00:06:12.225 00:06:12.225 ' 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:12.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.225 --rc genhtml_branch_coverage=1 00:06:12.225 --rc genhtml_function_coverage=1 00:06:12.225 --rc genhtml_legend=1 00:06:12.225 --rc geninfo_all_blocks=1 00:06:12.225 --rc geninfo_unexecuted_blocks=1 00:06:12.225 00:06:12.225 ' 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:12.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.225 --rc genhtml_branch_coverage=1 00:06:12.225 --rc genhtml_function_coverage=1 00:06:12.225 --rc genhtml_legend=1 00:06:12.225 --rc geninfo_all_blocks=1 00:06:12.225 --rc geninfo_unexecuted_blocks=1 00:06:12.225 00:06:12.225 ' 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:12.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.225 --rc genhtml_branch_coverage=1 00:06:12.225 --rc genhtml_function_coverage=1 00:06:12.225 --rc genhtml_legend=1 00:06:12.225 --rc geninfo_all_blocks=1 00:06:12.225 --rc geninfo_unexecuted_blocks=1 00:06:12.225 00:06:12.225 ' 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:12.225 
************************************ 00:06:12.225 START TEST nvmf_abort 00:06:12.225 ************************************ 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:12.225 * Looking for test storage... 00:06:12.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1689 -- # lcov --version 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:12.225 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:12.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.226 --rc genhtml_branch_coverage=1 00:06:12.226 --rc genhtml_function_coverage=1 00:06:12.226 --rc genhtml_legend=1 00:06:12.226 --rc geninfo_all_blocks=1 00:06:12.226 --rc geninfo_unexecuted_blocks=1 00:06:12.226 00:06:12.226 ' 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:12.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.226 --rc genhtml_branch_coverage=1 00:06:12.226 --rc genhtml_function_coverage=1 00:06:12.226 --rc genhtml_legend=1 00:06:12.226 --rc geninfo_all_blocks=1 00:06:12.226 --rc geninfo_unexecuted_blocks=1 00:06:12.226 00:06:12.226 ' 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:12.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.226 --rc genhtml_branch_coverage=1 00:06:12.226 --rc genhtml_function_coverage=1 00:06:12.226 --rc genhtml_legend=1 00:06:12.226 --rc geninfo_all_blocks=1 00:06:12.226 --rc geninfo_unexecuted_blocks=1 00:06:12.226 00:06:12.226 ' 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:12.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.226 --rc genhtml_branch_coverage=1 00:06:12.226 --rc genhtml_function_coverage=1 00:06:12.226 --rc genhtml_legend=1 00:06:12.226 --rc geninfo_all_blocks=1 00:06:12.226 --rc geninfo_unexecuted_blocks=1 00:06:12.226 00:06:12.226 ' 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
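Editor's note: the trace above shows nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']' and bash printing "[: : integer expression expected". The left operand of -eq expands to an empty string, and the [ builtin requires both sides of a numeric comparison to be integers. A minimal sketch of the failure mode and a generic guard, using a hypothetical variable name (the trace does not show which variable the script actually expands here):

# hypothetical placeholder; the real script reaches line 33 with some unset/empty value
flag=""
# reproduces the message seen in the log: "[: : integer expression expected"
[ "$flag" -eq 1 ] && echo "enabled"
# illustrative guard only (not how the SPDK script handles it): default before the numeric test
[ "${flag:-0}" -eq 1 ] && echo "enabled"

The error is harmless to the test flow here: the failed test simply takes the false branch and the script continues, which is why the trace proceeds to nvmf/common.sh@37.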
00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:12.226 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:12.227 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.227 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:12.227 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.227 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:12.227 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:12.227 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:12.227 04:42:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.759 04:42:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:14.759 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:14.759 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:14.759 04:42:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.759 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:14.759 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:14.760 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:14.760 04:42:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:14.760 04:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:14.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:14.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:06:14.760 00:06:14.760 --- 10.0.0.2 ping statistics --- 00:06:14.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.760 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:14.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:14.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:06:14.760 00:06:14.760 --- 10.0.0.1 ping statistics --- 00:06:14.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.760 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=2190049 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2190049 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2190049 ']' 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.760 04:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:14.760 [2024-10-28 04:42:05.159008] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:06:14.760 [2024-10-28 04:42:05.159102] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.760 [2024-10-28 04:42:05.298442] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:14.760 [2024-10-28 04:42:05.333191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.019 [2024-10-28 04:42:05.382701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.019 [2024-10-28 04:42:05.382753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.019 [2024-10-28 04:42:05.382768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.019 [2024-10-28 04:42:05.382779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.019 [2024-10-28 04:42:05.382788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:15.019 [2024-10-28 04:42:05.384354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.019 [2024-10-28 04:42:05.384449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.019 [2024-10-28 04:42:05.384452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.585 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.585 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:15.585 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:15.585 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.585 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.844 [2024-10-28 04:42:06.199888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.844 Malloc0 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.844 Delay0 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 
-s SPDK0 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.844 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:15.845 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.845 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.845 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.845 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:15.845 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.845 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.845 [2024-10-28 04:42:06.277713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.845 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.845 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:15.845 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.845 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.845 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.845 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:16.103 [2024-10-28 04:42:06.493744] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:18.000 Initializing NVMe Controllers 00:06:18.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:18.000 controller IO queue size 128 less than required 00:06:18.000 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:18.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:18.000 Initialization complete. Launching workers. 
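Editor's note: up to this point the abort test has configured the target entirely through rpc_cmd calls, which in these scripts conventionally wrap the rpc.py client. Collapsed into plain commands, the sequence visible in the trace looks roughly like the sketch below; the values are the ones from this log (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=4096, listener 10.0.0.2:4420), the $rpc shorthand is an assumption, and a real reproduction would also need to aim rpc.py at the nvmf_tgt instance running inside cvl_0_0_ns_spdk:

# condensed sketch of the RPC sequence traced above (assumes a running nvmf_tgt)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256            # flags copied from the trace
$rpc bdev_malloc_create 64 4096 -b Malloc0                      # 64 MB bdev, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# the abort example is then pointed at that listener with one core and queue depth 128:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The Delay0 bdev layered on Malloc0 presumably keeps I/O in flight long enough to give the abort example something to abort, and the "controller IO queue size 128 less than required" notice that follows reflects the requested -q 128 exceeding what the controller advertises, so excess requests queue at the NVMe driver as the log itself states.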
00:06:18.000 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26781 00:06:18.000 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26842, failed to submit 62 00:06:18.000 success 26785, unsuccessful 57, failed 0 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:18.000 rmmod nvme_tcp 00:06:18.000 rmmod nvme_fabrics 00:06:18.000 rmmod nvme_keyring 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2190049 ']' 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2190049 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2190049 ']' 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2190049 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.000 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2190049 00:06:18.258 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:18.259 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:18.259 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2190049' 00:06:18.259 killing process with pid 2190049 00:06:18.259 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2190049 00:06:18.259 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2190049 00:06:18.518 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:18.518 04:42:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:18.518 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:18.518 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:18.518 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:06:18.518 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:18.518 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:06:18.518 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:18.518 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:18.518 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.518 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.518 04:42:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.420 04:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:20.420 00:06:20.420 real 0m8.266s 00:06:20.420 user 0m12.864s 00:06:20.420 sys 0m2.673s 00:06:20.420 04:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.420 04:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.420 ************************************ 00:06:20.420 END TEST nvmf_abort 00:06:20.420 ************************************ 00:06:20.420 04:42:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:20.421 04:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:20.421 04:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.421 04:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:20.421 ************************************ 00:06:20.421 START TEST nvmf_ns_hotplug_stress 00:06:20.421 ************************************ 00:06:20.421 04:42:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:20.681 * Looking for test storage... 
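Editor's note: the nvmftestfini lines just above (iptables-save | grep -v SPDK_NVMF | iptables-restore, ip -4 addr flush cvl_0_1, _remove_spdk_ns) undo the topology that nvmftestinit built at the start of the test. Assembled from the commands visible in this trace, the setup and teardown look roughly like the sketch below; the interface names cvl_0_0/cvl_0_1 are specific to this machine's ice ports, and the final namespace deletion is an assumption since _remove_spdk_ns runs with its trace suppressed (15> /dev/null):

# sketch of the TCP test topology as set up in this run
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# teardown, as in nvmftestfini above: drop only the SPDK-tagged rules and flush the addresses
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk    # assumption about what _remove_spdk_ns does; its body is hidden in the trace

The same init sequence is rebuilt immediately afterwards for the ns_hotplug_stress test, which is why the ip netns add / ip link set commands reappear a few seconds later in the log.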
00:06:20.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lcov --version 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:20.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.681 --rc genhtml_branch_coverage=1 00:06:20.681 --rc genhtml_function_coverage=1 00:06:20.681 --rc genhtml_legend=1 00:06:20.681 --rc geninfo_all_blocks=1 00:06:20.681 --rc geninfo_unexecuted_blocks=1 00:06:20.681 00:06:20.681 ' 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:20.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.681 --rc genhtml_branch_coverage=1 00:06:20.681 --rc genhtml_function_coverage=1 00:06:20.681 --rc genhtml_legend=1 00:06:20.681 --rc geninfo_all_blocks=1 00:06:20.681 --rc geninfo_unexecuted_blocks=1 00:06:20.681 00:06:20.681 ' 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:20.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.681 --rc genhtml_branch_coverage=1 00:06:20.681 --rc genhtml_function_coverage=1 00:06:20.681 --rc genhtml_legend=1 00:06:20.681 --rc geninfo_all_blocks=1 00:06:20.681 --rc geninfo_unexecuted_blocks=1 00:06:20.681 00:06:20.681 ' 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:20.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.681 --rc genhtml_branch_coverage=1 00:06:20.681 --rc genhtml_function_coverage=1 00:06:20.681 --rc genhtml_legend=1 00:06:20.681 --rc geninfo_all_blocks=1 00:06:20.681 --rc geninfo_unexecuted_blocks=1 00:06:20.681 00:06:20.681 ' 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.681 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:20.682 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:23.215 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.215 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.216 
04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:23.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:23.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:23.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:23.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:23.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:06:23.216 00:06:23.216 --- 10.0.0.2 ping statistics --- 00:06:23.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.216 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:23.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:06:23.216 00:06:23.216 --- 10.0.0.1 ping statistics --- 00:06:23.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.216 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2192779 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2192779 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
2192779 ']' 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.216 [2024-10-28 04:42:13.434357] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:06:23.216 [2024-10-28 04:42:13.434452] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.216 [2024-10-28 04:42:13.579680] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:23.216 [2024-10-28 04:42:13.621807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.216 [2024-10-28 04:42:13.672882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.216 [2024-10-28 04:42:13.672984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.216 [2024-10-28 04:42:13.673002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.216 [2024-10-28 04:42:13.673015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.216 [2024-10-28 04:42:13.673027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
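Up to this point the trace has identified the two ice ports under 0000:0a:00.0/1 as cvl_0_0 and cvl_0_1, moved cvl_0_0 into the cvl_0_0_ns_spdk network namespace, addressed the pair as 10.0.0.2 (target side, inside the namespace) and 10.0.0.1 (initiator side), inserted an iptables ACCEPT rule for TCP port 4420, verified reachability with ping in both directions, and launched nvmf_tgt inside the namespace with core mask 0xE while waiting for its RPC socket at /var/tmp/spdk.sock. A condensed, standalone sketch of that bring-up follows; the command lines are lifted from the trace with paths abbreviated, and the RPC poll at the end is an assumption standing in for the harness's waitforlisten helper.

    # Move the target-side port into its own namespace and address both ends (names/IPs from the trace).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the NVMe-oF target inside the namespace (cores 1-3 via -m 0xE) and wait for /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1    # give up if the target process died
        sleep 0.5
    done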
00:06:23.216 [2024-10-28 04:42:13.674669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.216 [2024-10-28 04:42:13.674703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.216 [2024-10-28 04:42:13.674707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.216 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.532 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.532 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:23.532 04:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:23.864 [2024-10-28 04:42:14.102155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.864 04:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:23.864 04:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:24.158 [2024-10-28 04:42:14.647662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.158 04:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:24.415 04:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:24.673 Malloc0 00:06:24.673 04:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:24.931 Delay0 00:06:24.931 04:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.188 04:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:25.447 NULL1 00:06:25.447 04:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:26.013 04:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2193323 00:06:26.013 04:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:26.013 04:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:26.013 04:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.946 Read completed with error (sct=0, sc=11) 00:06:26.946 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.461 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:27.461 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:27.719 true 00:06:27.719 04:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:27.719 04:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.284 04:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.541 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:28.541 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:28.798 true 00:06:28.798 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:28.798 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.055 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
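By this point the target has been configured over rpc.py: a TCP transport, the subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, at most 10 namespaces), a data and a discovery listener on 10.0.0.2:4420, a 32 MB Malloc0 wrapped in a Delay0 delay bdev, and a NULL1 null bdev, with Delay0 and NULL1 attached as namespaces. spdk_nvme_perf then runs 30 seconds of queued random reads against the subsystem while the loop at ns_hotplug_stress.sh lines 44-50 keeps detaching namespace 1, re-attaching Delay0 and resizing NULL1 one step larger for as long as perf is alive. The sketch below condenses that sequence; the RPC command lines are copied from the trace (paths abbreviated), while the surrounding loop is a reconstruction and may differ in detail from the real script.

    # Target-side configuration, as issued over scripts/rpc.py in the trace.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Load generator plus the hotplug loop (reconstructed).
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do           # keep cycling while perf runs
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"       # NULL1 grows one step per pass
    done
    wait "$PERF_PID"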
00:06:29.619 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:29.619 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:29.619 true 00:06:29.619 04:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:29.619 04:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.877 04:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.442 04:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:30.442 04:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:30.442 true 00:06:30.700 04:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:30.700 04:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.632 04:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.632 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:31.632 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:31.889 true 00:06:31.889 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:31.889 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.147 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.711 04:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:32.711 04:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:32.711 true 00:06:32.711 04:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:32.711 04:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.969 04:42:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.226 04:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:33.226 04:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:33.791 true 00:06:33.791 04:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:33.791 04:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.359 04:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.616 04:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:34.616 04:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:34.873 true 00:06:34.873 04:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:34.873 04:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.438 04:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.438 04:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:35.438 04:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:35.695 true 00:06:35.952 04:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:35.952 04:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.516 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.772 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:36.772 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:37.029 true 00:06:37.029 04:42:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:37.029 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.286 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.849 04:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:37.849 04:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:37.849 true 00:06:37.849 04:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:37.850 04:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.414 04:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.414 04:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:38.414 04:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:38.672 true 00:06:38.672 04:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:38.672 04:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.044 04:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.044 04:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:40.044 04:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:40.302 true 00:06:40.302 04:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:40.302 04:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.560 04:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.818 04:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1014 00:06:40.818 04:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:41.075 true 00:06:41.075 04:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:41.075 04:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.333 04:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.899 04:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:41.899 04:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:41.899 true 00:06:41.899 04:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:41.899 04:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.831 04:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.088 04:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:43.088 04:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:43.346 true 00:06:43.346 04:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:43.346 04:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.603 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.860 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:43.860 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:44.118 true 00:06:44.376 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:44.376 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.633 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.891 04:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:44.891 04:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:45.149 true 00:06:45.149 04:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:45.149 04:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.082 04:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.340 04:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:46.340 04:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:46.597 true 00:06:46.597 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:46.597 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.855 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.113 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:47.113 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:47.370 true 00:06:47.370 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:47.370 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.628 04:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.886 04:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:47.886 04:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:48.144 true 00:06:48.144 04:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:48.144 04:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.077 04:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.333 04:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:49.333 04:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:49.590 true 00:06:49.590 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:49.590 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.847 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.412 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:50.412 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:50.412 true 00:06:50.412 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:50.412 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.975 04:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.975 04:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:50.975 04:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:51.232 true 00:06:51.490 04:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:51.490 04:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.420 04:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.420 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:06:52.420 04:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:52.420 04:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:52.678 true 00:06:52.678 04:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:52.678 04:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.275 04:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.275 04:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:53.275 04:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:53.558 true 00:06:53.558 04:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:53.558 04:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.815 04:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.380 04:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:54.380 04:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:54.380 true 00:06:54.380 04:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:54.380 04:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.312 04:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.569 04:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:55.569 04:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:55.826 true 00:06:55.826 04:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:55.826 04:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.084 Initializing NVMe Controllers 00:06:56.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:56.084 Controller IO queue size 128, less than required. 00:06:56.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:56.084 Controller IO queue size 128, less than required. 00:06:56.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:56.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:56.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:56.084 Initialization complete. Launching workers. 00:06:56.084 ======================================================== 00:06:56.084 Latency(us) 00:06:56.084 Device Information : IOPS MiB/s Average min max 00:06:56.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 595.75 0.29 88617.09 2751.18 1013124.56 00:06:56.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8656.08 4.23 14788.77 3956.08 450202.24 00:06:56.084 ======================================================== 00:06:56.084 Total : 9251.83 4.52 19542.74 2751.18 1013124.56 00:06:56.084 00:06:56.084 04:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.341 04:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:56.341 04:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:56.598 true 00:06:56.598 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2193323 00:06:56.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2193323) - No such process 00:06:56.598 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2193323 00:06:56.598 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.162 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.162 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:57.162 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:57.162 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:57.162 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.162 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:57.420 null0 00:06:57.420 04:42:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:57.420 04:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.420 04:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:57.677 null1 00:06:57.941 04:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:57.941 04:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.941 04:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:58.200 null2 00:06:58.200 04:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.200 04:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.200 04:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:58.458 null3 00:06:58.458 04:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.458 04:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.458 04:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:58.715 null4 00:06:58.715 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.715 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.715 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:58.973 null5 00:06:58.973 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.973 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.973 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:59.230 null6 00:06:59.230 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.230 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:59.230 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:59.489 null7 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
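With perf finished, the kill -0 check fails ("No such process"), the single-namespace phase ends, and namespaces 1 and 2 are removed. The script then sets nthreads=8 and creates eight null bdevs, null0 through null7, one per worker of the concurrent add/remove phase whose spawn entries follow; a sketch of the workers themselves appears after those entries. The bdev creation seen in the @59/@60 trace lines reduces to the loop below (rpc.py path abbreviated; the loop form is a reconstruction).

    # One null bdev per future add/remove worker, with the sizes passed in the trace.
    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096
    done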
00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
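The @14, @16 and @17 lines above, together with the @18 remove calls that follow, are the body of the add_remove helper, and the @62-@64 lines are the loop that launches it eight times in the background, pairing namespace IDs 1 through 8 with null0 through null7 and collecting the worker PIDs for the wait at @66 just below. Reconstructed as a sketch from those trace lines (the real ns_hotplug_stress.sh may differ in minor details):

    # Repeatedly attach and detach one (nsid, bdev) pair against cnode1.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # Eight workers in parallel, then wait for all of them.
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &    # nsid 1..8 mapped onto null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                       # all eight workers must finish cleanly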
00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2197294 2197295 2197297 2197299 2197301 2197303 2197305 2197307 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.489 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.747 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.747 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.747 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.747 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.747 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.747 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.747 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.747 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.006 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.264 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.264 04:42:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.264 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.264 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.264 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.521 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.521 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.521 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.779 04:42:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.779 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:01.037 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:01.037 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.037 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.037 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.037 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:01.037 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:01.037 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.037 04:42:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:01.295 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.295 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.295 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.295 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.295 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.295 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.295 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.295 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.295 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.295 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.295 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
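The interleaved add/remove xtrace above comes from several concurrent add_remove workers. A minimal sketch of one worker, reconstructed from the ns_hotplug_stress.sh@14-@18 trace lines (an approximation inferred from this log, not the verbatim upstream script; the rpc.py path and the cnode1 NQN are taken directly from the trace), would look roughly like:

# Hedged reconstruction of the per-worker loop seen at @14-@18.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

add_remove() {
        local nsid=$1 bdev=$2          # e.g. "8 null7" as seen at @14
        local i
        for (( i = 0; i < 10; ++i )); do
                # attach the null bdev as namespace $nsid, then detach it again
                $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
                $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
}

Each iteration therefore produces one nvmf_subsystem_add_ns and one nvmf_subsystem_remove_ns RPC per namespace, which is why the removes in the log arrive in bursts once eight workers run in parallel.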
00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.296 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:01.554 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:01.554 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.554 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.555 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:01.555 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.555 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.555 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:01.555 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:01.813 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.813 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.813 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.813 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.813 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.813 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.813 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.813 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.813 04:42:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.813 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.814 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:02.072 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:02.072 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:02.072 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.072 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:02.072 04:42:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:02.072 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.072 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:02.072 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.638 04:42:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.638 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:02.896 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:02.896 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:02.896 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:02.896 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.896 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:02.896 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.896 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:02.896 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.155 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:03.413 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:03.413 04:42:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:03.413 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:03.413 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:03.413 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:03.413 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:03.413 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:03.413 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:03.672 04:42:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.672 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:03.930 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:03.930 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:03.930 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:03.930 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:03.930 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:03.930 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.930 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:03.930 04:42:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.188 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
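The driver side of the stress test is visible earlier in the trace at ns_hotplug_stress.sh@62-@66 ("(( i < nthreads ))", "add_remove 8 null7", "wait 2197294 ..."). A hedged sketch of that spawning loop, assuming the add_remove function reconstructed above and with any names not shown in the trace treated as placeholders:

# Hedged sketch of the driver loop implied by @62-@66: start one backgrounded
# add_remove worker per namespace, collect the PIDs, then wait on all of them.
nthreads=8
pids=()
for (( i = 0; i < nthreads; ++i )); do
        # namespace IDs appear 1-based and bdevs are null0..null7 in the trace
        add_remove "$(( i + 1 ))" "null$i" &
        pids+=($!)
done
wait "${pids[@]}"    # e.g. "wait 2197294 2197295 ..." at @66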
00:07:04.446 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.446 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.446 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:04.704 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:04.704 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:04.704 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.704 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:04.704 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:04.704 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:04.704 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:04.704 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.962 04:42:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.962 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.220 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:05.220 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.220 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.220 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.220 04:42:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.220 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:05.220 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:05.220 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.479 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:05.479 04:42:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:05.479 rmmod nvme_tcp 00:07:05.479 rmmod nvme_fabrics 00:07:05.479 rmmod nvme_keyring 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2192779 ']' 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2192779 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2192779 ']' 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2192779 00:07:05.479 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2192779 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2192779' 00:07:05.738 killing process with pid 2192779 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2192779 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2192779 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:05.738 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:07:05.997 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:05.997 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:05.997 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.997 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:07:05.997 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.900 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:07.900 00:07:07.900 real 0m47.406s 00:07:07.900 user 3m40.677s 00:07:07.900 sys 0m15.672s 00:07:07.900 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.900 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:07.900 ************************************ 00:07:07.900 END TEST nvmf_ns_hotplug_stress 00:07:07.900 ************************************ 00:07:07.900 04:42:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:07.900 04:42:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.900 04:42:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.900 04:42:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.900 ************************************ 00:07:07.900 START TEST nvmf_delete_subsystem 00:07:07.900 ************************************ 00:07:07.900 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:07.900 * Looking for test storage... 00:07:07.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.900 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:07.900 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lcov --version 00:07:07.900 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.159 04:42:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:08.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.159 --rc genhtml_branch_coverage=1 00:07:08.159 --rc genhtml_function_coverage=1 00:07:08.159 --rc genhtml_legend=1 00:07:08.159 --rc geninfo_all_blocks=1 00:07:08.159 --rc geninfo_unexecuted_blocks=1 00:07:08.159 00:07:08.159 ' 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:08.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.159 --rc genhtml_branch_coverage=1 00:07:08.159 --rc genhtml_function_coverage=1 00:07:08.159 --rc genhtml_legend=1 00:07:08.159 --rc geninfo_all_blocks=1 00:07:08.159 --rc geninfo_unexecuted_blocks=1 00:07:08.159 00:07:08.159 ' 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:08.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.159 --rc genhtml_branch_coverage=1 00:07:08.159 --rc genhtml_function_coverage=1 00:07:08.159 --rc genhtml_legend=1 00:07:08.159 --rc geninfo_all_blocks=1 00:07:08.159 --rc geninfo_unexecuted_blocks=1 00:07:08.159 00:07:08.159 ' 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:08.159 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.159 --rc genhtml_branch_coverage=1 00:07:08.159 --rc genhtml_function_coverage=1 00:07:08.159 --rc genhtml_legend=1 00:07:08.159 --rc geninfo_all_blocks=1 00:07:08.159 --rc geninfo_unexecuted_blocks=1 00:07:08.159 00:07:08.159 ' 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.159 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:08.160 04:42:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:10.064 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.064 
04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:10.064 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.064 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:10.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:10.065 Found net devices under 0000:0a:00.1: cvl_0_1 
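The two "Found net devices under ..." records above come from gather_supported_nvmf_pci_devs walking sysfs: for each matching E810 function it globs /sys/bus/pci/devices/<bdf>/net/ and records the bound interface name (cvl_0_0, cvl_0_1). A minimal standalone sketch of that walk, checking only the 0x8086:0x159b ID seen in this run (the other e810/x722/mlx IDs from the arrays above are omitted for brevity):

#!/usr/bin/env bash
# Sketch: list the kernel net interfaces bound to Intel E810 (8086:159b) PCI
# functions, the same sysfs relationship nvmf/common.sh relies on above.
for pci in /sys/bus/pci/devices/*; do
  [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
  for net in "$pci"/net/*; do
    [[ -e $net ]] || continue          # function present but no netdev bound
    echo "Found net devices under ${pci##*/}: ${net##*/}"
  done
done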
00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.065 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:10.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:07:10.324 00:07:10.324 --- 10.0.0.2 ping statistics --- 00:07:10.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.324 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:10.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:07:10.324 00:07:10.324 --- 10.0.0.1 ping statistics --- 00:07:10.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.324 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.324 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2200174 00:07:10.325 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:10.325 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2200174 00:07:10.325 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2200174 ']' 00:07:10.325 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.325 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.325 04:43:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.325 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.325 04:43:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.325 [2024-10-28 04:43:00.799928] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:07:10.325 [2024-10-28 04:43:00.800027] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.583 [2024-10-28 04:43:00.938958] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:10.583 [2024-10-28 04:43:00.980976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.583 [2024-10-28 04:43:01.028352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.583 [2024-10-28 04:43:01.028426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.583 [2024-10-28 04:43:01.028442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.583 [2024-10-28 04:43:01.028456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.583 [2024-10-28 04:43:01.028467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
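Condensing the nvmf_tcp_init and nvmfappstart records above: the target-side E810 port is moved into a private network namespace, both ends are addressed on 10.0.0.0/24, TCP port 4420 is opened through iptables, reachability is verified with a ping in each direction, and nvmf_tgt is started inside the namespace. The interface names, addresses, and flags below are copied from the log; running this standalone is a sketch of the wiring, not the autotest helper itself, and it assumes SPDK is built under $SPDK_ROOT.

#!/usr/bin/env bash
# Sketch of the namespace wiring logged above (assumes the E810 ports already
# carry the cvl_0_0/cvl_0_1 names).
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                       # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the default ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# The autotest also tags this rule with an SPDK_NVMF comment so teardown can strip it.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator

ip netns exec "$NS" "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &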
00:07:10.583 [2024-10-28 04:43:01.029918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.583 [2024-10-28 04:43:01.029925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 [2024-10-28 04:43:01.821675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 [2024-10-28 04:43:01.837829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 NULL1 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 
-w 1000000 -n 1000000 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 Delay0 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2200324 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:11.518 04:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:11.518 [2024-10-28 04:43:02.022601] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:13.416 04:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.416 04:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.416 04:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting 
I/O failed: -6 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 [2024-10-28 04:43:04.098252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4b20 is same with the state(6) to be set 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with 
error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 [2024-10-28 04:43:04.099293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b2ad0 is same with the state(6) to be set 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 Write completed with error (sct=0, sc=8) 00:07:13.674 starting I/O failed: -6 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.674 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 starting I/O failed: -6 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 starting I/O failed: -6 00:07:13.675 Write completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 starting I/O failed: -6 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Write completed with error (sct=0, sc=8) 00:07:13.675 [2024-10-28 04:43:04.099770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7f938000d470 is same with the state(6) to be set 00:07:13.675 Write completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Write completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Write completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Write completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Write completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Write completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Write completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Write completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:13.675 Read completed with error (sct=0, sc=8) 00:07:14.607 [2024-10-28 04:43:05.078373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b0da0 is same with the state(6) to be set 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with 
error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 [2024-10-28 04:43:05.098015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f938000cfe0 is same with the state(6) to be set 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 [2024-10-28 04:43:05.098250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f938000d7a0 is same with the state(6) to be set 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, 
sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 [2024-10-28 04:43:05.099717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b47f0 is same with the state(6) to be set 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 Write completed with error (sct=0, sc=8) 00:07:14.607 Read completed with error (sct=0, sc=8) 00:07:14.607 [2024-10-28 04:43:05.100198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b28f0 is same with the state(6) to be set 00:07:14.607 Initializing NVMe Controllers 00:07:14.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:14.607 Controller IO queue size 128, less than required. 00:07:14.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:14.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:14.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:14.607 Initialization complete. Launching workers. 
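The burst of "completed with error (sct=0, sc=8)" records above is the point of this first round: Delay0 is a delay bdev layered on the NULL1 null bdev with roughly a second of added latency on every read and write, so when nvmf_delete_subsystem lands two seconds into the perf run there is still a deep queue of outstanding commands (-q 128), and they complete with a generic-status abort (sct=0, sc=8, Command Aborted due to SQ Deletion in the NVMe generic status set) instead of hanging the delete. Driven through scripts/rpc.py instead of the test's rpc_cmd wrapper, the same sequence would look roughly like the sketch below; all arguments are copied from the log, and the RPC socket is assumed to be the default /var/tmp/spdk.sock of the nvmf_tgt started earlier.

#!/usr/bin/env bash
# Sketch of the delete-while-I/O round shown above.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_ROOT/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_null_create NULL1 1000 512
"$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$RPC" nvmf_subsystem_add_ns "$NQN" Delay0

# 5 s of 70/30 randrw at queue depth 128, then pull the subsystem out from under it.
"$SPDK_ROOT/build/bin/spdk_nvme_perf" -c 0xC \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420" \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
"$RPC" nvmf_delete_subsystem "$NQN"    # outstanding I/O completes with sct=0, sc=8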
00:07:14.607 ======================================================== 00:07:14.607 Latency(us) 00:07:14.607 Device Information : IOPS MiB/s Average min max 00:07:14.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.40 0.08 911327.91 1066.87 1011665.16 00:07:14.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.92 0.08 953136.45 355.12 2001882.88 00:07:14.607 ======================================================== 00:07:14.607 Total : 324.32 0.16 932072.12 355.12 2001882.88 00:07:14.607 00:07:14.607 [2024-10-28 04:43:05.100624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b0da0 (9): Bad file descriptor 00:07:14.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:14.607 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.607 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:14.607 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2200324 00:07:14.607 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2200324 00:07:15.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2200324) - No such process 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2200324 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2200324 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2200324 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.173 04:43:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.173 [2024-10-28 04:43:05.623821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2200725 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200725 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:15.173 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:15.431 [2024-10-28 04:43:05.795087] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
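The xtrace above boils down to a short, repeatable sequence: delete_subsystem.sh re-creates cnode1, re-adds the TCP listener and the Delay0 namespace, and then starts spdk_nvme_perf against it so that the next deletion happens under live I/O. A condensed sketch of that sequence, using only paths and arguments that appear in the trace (rpc_cmd is taken to be the test suite's wrapper around the SPDK RPC client):

# Re-create the subsystem deleted in the previous pass: serial SPDK00000000000001, up to 10 namespaces, any host allowed
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Listen for NVMe/TCP connections on the target-side address used throughout this run
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Attach the Delay0 bdev as a namespace of the subsystem
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Host-side load: 3 seconds of 70/30 random read/write at queue depth 128 with 512-byte I/O (flags copied from the trace)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

The deprecation warning right above this point is expected here: perf also connects to the discovery subsystem through the same listener even though that listener was never explicitly added to it, which is what the target is warning about.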
00:07:15.688 04:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:15.688 04:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200725 00:07:15.688 04:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:16.252 04:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:16.252 04:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200725 00:07:16.252 04:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:16.817 04:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:16.817 04:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200725 00:07:16.817 04:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:17.074 04:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:17.074 04:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200725 00:07:17.074 04:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:17.639 04:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:17.639 04:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200725 00:07:17.639 04:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:18.204 04:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:18.204 04:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200725 00:07:18.204 04:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:18.462 Initializing NVMe Controllers 00:07:18.462 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:18.462 Controller IO queue size 128, less than required. 00:07:18.462 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:18.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:18.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:18.462 Initialization complete. Launching workers. 
00:07:18.462 ======================================================== 00:07:18.462 Latency(us) 00:07:18.462 Device Information : IOPS MiB/s Average min max 00:07:18.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003512.29 1000092.96 1012860.27 00:07:18.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005030.82 1000002.76 1012934.80 00:07:18.462 ======================================================== 00:07:18.462 Total : 256.00 0.12 1004271.56 1000002.76 1012934.80 00:07:18.462 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200725 00:07:18.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2200725) - No such process 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2200725 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:18.720 rmmod nvme_tcp 00:07:18.720 rmmod nvme_fabrics 00:07:18.720 rmmod nvme_keyring 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2200174 ']' 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2200174 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2200174 ']' 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2200174 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2200174 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2200174' 00:07:18.720 killing process with pid 2200174 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2200174 00:07:18.720 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2200174 00:07:18.979 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:18.979 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:18.979 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:18.979 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:18.979 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:07:18.979 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:18.979 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:07:18.979 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:18.979 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:18.979 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.979 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.979 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:21.516 00:07:21.516 real 0m13.076s 00:07:21.516 user 0m29.323s 00:07:21.516 sys 0m2.885s 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.516 ************************************ 00:07:21.516 END TEST nvmf_delete_subsystem 00:07:21.516 ************************************ 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:21.516 ************************************ 00:07:21.516 START TEST nvmf_host_management 00:07:21.516 ************************************ 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:21.516 * Looking for test storage... 
00:07:21.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1689 -- # lcov --version 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.516 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:21.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.517 --rc genhtml_branch_coverage=1 00:07:21.517 --rc genhtml_function_coverage=1 00:07:21.517 --rc genhtml_legend=1 00:07:21.517 --rc geninfo_all_blocks=1 00:07:21.517 --rc geninfo_unexecuted_blocks=1 00:07:21.517 00:07:21.517 ' 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:21.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.517 --rc genhtml_branch_coverage=1 00:07:21.517 --rc genhtml_function_coverage=1 00:07:21.517 --rc genhtml_legend=1 00:07:21.517 --rc geninfo_all_blocks=1 00:07:21.517 --rc geninfo_unexecuted_blocks=1 00:07:21.517 00:07:21.517 ' 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:21.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.517 --rc genhtml_branch_coverage=1 00:07:21.517 --rc genhtml_function_coverage=1 00:07:21.517 --rc genhtml_legend=1 00:07:21.517 --rc geninfo_all_blocks=1 00:07:21.517 --rc geninfo_unexecuted_blocks=1 00:07:21.517 00:07:21.517 ' 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:21.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.517 --rc genhtml_branch_coverage=1 00:07:21.517 --rc genhtml_function_coverage=1 00:07:21.517 --rc genhtml_legend=1 00:07:21.517 --rc geninfo_all_blocks=1 00:07:21.517 --rc geninfo_unexecuted_blocks=1 00:07:21.517 00:07:21.517 ' 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:21.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:21.517 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:23.420 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:23.420 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:23.420 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:23.420 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.421 04:43:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:23.421 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:23.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:07:23.421 00:07:23.421 --- 10.0.0.2 ping statistics --- 00:07:23.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.421 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:07:23.421 00:07:23.421 --- 10.0.0.1 ping statistics --- 00:07:23.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.421 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2203055 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2203055 00:07:23.421 04:43:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2203055 ']' 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.421 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.679 [2024-10-28 04:43:14.047053] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:07:23.679 [2024-10-28 04:43:14.047139] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.679 [2024-10-28 04:43:14.186916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:23.679 [2024-10-28 04:43:14.225309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.937 [2024-10-28 04:43:14.275509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.937 [2024-10-28 04:43:14.275561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.937 [2024-10-28 04:43:14.275575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.937 [2024-10-28 04:43:14.275587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.937 [2024-10-28 04:43:14.275598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
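For nvmf_host_management the target application itself runs inside the cvl_0_0_ns_spdk namespace created a few lines earlier, so only the namespaced cvl_0_0 interface ever sees it. Condensed from the nvmfappstart trace above; waitforlisten is the suite's helper that blocks until the app's RPC socket (/var/tmp/spdk.sock, per the "Waiting for process to start up..." message) accepts connections, and its internals are not part of this excerpt:

# -m 0x1E pins the reactors to cores 1-4 (matching the "Reactor started on core" notices that follow);
# -e 0xFFFF enables every tracepoint group, as the "Tracepoint Group Mask 0xFFFF specified" notice reports
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Hold off on any rpc_cmd traffic until the app is up and its RPC socket is actually listening
waitforlisten "$nvmfpid"

The TCP transport, the Malloc0 bdev, and the cnode0 subsystem that bdevperf later connects to are all configured through that same RPC socket in the lines that follow.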
00:07:23.937 [2024-10-28 04:43:14.277226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.937 [2024-10-28 04:43:14.277559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.937 [2024-10-28 04:43:14.277617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:23.937 [2024-10-28 04:43:14.277620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.565 [2024-10-28 04:43:15.071338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.565 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.565 Malloc0 00:07:24.565 [2024-10-28 04:43:15.151191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2203232 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2203232 /var/tmp/bdevperf.sock 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2203232 ']' 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:24.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:24.823 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:24.823 { 00:07:24.823 "params": { 00:07:24.823 "name": "Nvme$subsystem", 00:07:24.823 "trtype": "$TEST_TRANSPORT", 00:07:24.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:24.823 "adrfam": "ipv4", 00:07:24.823 "trsvcid": "$NVMF_PORT", 00:07:24.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:24.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:24.823 "hdgst": ${hdgst:-false}, 00:07:24.823 "ddgst": ${ddgst:-false} 00:07:24.823 }, 00:07:24.823 "method": "bdev_nvme_attach_controller" 00:07:24.824 } 00:07:24.824 EOF 00:07:24.824 )") 00:07:24.824 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:24.824 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:24.824 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:24.824 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:24.824 "params": { 00:07:24.824 "name": "Nvme0", 00:07:24.824 "trtype": "tcp", 00:07:24.824 "traddr": "10.0.0.2", 00:07:24.824 "adrfam": "ipv4", 00:07:24.824 "trsvcid": "4420", 00:07:24.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:24.824 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:24.824 "hdgst": false, 00:07:24.824 "ddgst": false 00:07:24.824 }, 00:07:24.824 "method": "bdev_nvme_attach_controller" 00:07:24.824 }' 00:07:24.824 [2024-10-28 04:43:15.235402] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
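gen_nvmf_target_json 0 emits the bdev_nvme_attach_controller fragment printed just above and hands it to bdevperf on /dev/fd/63. Written out by hand, the invocation looks roughly like the sketch below; the fragment is wrapped in SPDK's standard JSON-config layout (subsystems, bdev, config), which the helper assembles with jq but which is not printed in full in this excerpt, so treat the exact nesting as an assumption:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
    --json <(cat << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)

Because the attached controller is named Nvme0, the bdev it exposes is Nvme0n1, which is the name the waitforio polling further down passes to bdev_get_iostat.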
00:07:24.824 [2024-10-28 04:43:15.235479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203232 ] 00:07:24.824 [2024-10-28 04:43:15.367857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:24.824 [2024-10-28 04:43:15.406447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.082 [2024-10-28 04:43:15.454022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.082 Running I/O for 10 seconds... 00:07:25.647 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.647 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:25.647 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:25.648 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.906 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.906 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.906 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:25.906 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:07:25.907 04:43:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.907 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.907 [2024-10-28 04:43:16.311547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.311618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.311655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.311674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.311700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.311715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.311730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.311744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.311761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.311775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.311790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.311804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.311820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.311834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.311858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.311873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
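The waitforio loop traced above, and the nvmf_subsystem_remove_host call whose fallout is the stream of ABORTED - SQ DELETION completions around this point, reduce to roughly the following; the sleep between polls is an assumption, since this run crossed the 100-read threshold (963 reads) on its first check:

# Poll bdevperf's private RPC socket until the attached bdev has completed a meaningful amount of I/O
for (( i = 10; i != 0; i-- )); do
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    (( read_io_count >= 100 )) && break
    sleep 0.5   # assumed back-off; this branch is never taken in the pass logged here
done
# With I/O still in flight, revoke the host's access; the target drops its queue pairs and the
# host-side driver prints the READ/WRITE "ABORTED - SQ DELETION" completions seen in the log
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0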
00:07:25.907 [2024-10-28 04:43:16.311889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.311903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.311919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.311933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.311949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.311962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.311978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.311991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 
04:43:16.312187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.907 [2024-10-28 04:43:16.312632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.907 [2024-10-28 04:43:16.312653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.312669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.312684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.312707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.312720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.908 [2024-10-28 04:43:16.312735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.312749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.312764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.312778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.312793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.312806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.312821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.312836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.312851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.312866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.312881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:25.908 [2024-10-28 04:43:16.312895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.312910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.312924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.312939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.312962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.312977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.312991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.908 [2024-10-28 04:43:16.313025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313069] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.908 [2024-10-28 04:43:16.313142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.908 [2024-10-28 04:43:16.313550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.908 [2024-10-28 04:43:16.313564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2265960 is same with the state(6) to be set 00:07:25.908 [2024-10-28 04:43:16.314833] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:25.908 task offset: 8064 on job bdev=Nvme0n1 fails 00:07:25.908 00:07:25.908 Latency(us) 00:07:25.908 [2024-10-28T03:43:16.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.908 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:25.908 Job: Nvme0n1 ended in about 0.68 seconds with error 00:07:25.908 Verification LBA range: start 0x0 length 0x400 00:07:25.908 Nvme0n1 : 0.68 1511.70 94.48 94.48 0.00 39072.79 7007.38 33285.04 00:07:25.908 [2024-10-28T03:43:16.504Z] =================================================================================================================== 00:07:25.908 [2024-10-28T03:43:16.504Z] Total : 1511.70 94.48 94.48 0.00 39072.79 
7007.38 33285.04 00:07:25.908 [2024-10-28 04:43:16.316846] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.908 [2024-10-28 04:43:16.316875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204c580 (9): Bad file descriptor 00:07:25.908 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.908 04:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:25.908 [2024-10-28 04:43:16.365246] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2203232 00:07:26.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2203232) - No such process 00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:26.842 { 00:07:26.842 "params": { 00:07:26.842 "name": "Nvme$subsystem", 00:07:26.842 "trtype": "$TEST_TRANSPORT", 00:07:26.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:26.842 "adrfam": "ipv4", 00:07:26.842 "trsvcid": "$NVMF_PORT", 00:07:26.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:26.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:26.842 "hdgst": ${hdgst:-false}, 00:07:26.842 "ddgst": ${ddgst:-false} 00:07:26.842 }, 00:07:26.842 "method": "bdev_nvme_attach_controller" 00:07:26.842 } 00:07:26.842 EOF 00:07:26.842 )") 00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
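Before the second bdevperf run starts, gen_nvmf_target_json assembles its storage configuration on the fly: the heredoc above expands into a single bdev_nvme_attach_controller entry pointing at the target, and bdevperf receives it as --json /dev/fd/62, i.e. over a file descriptor rather than a config file on disk. A hedged, stand-alone version of the same pattern is sketched below; the attach parameters are the ones printed just below in this log, while the subsystems/bdev wrapper and the temporary file are illustrative assumptions about how such a config is normally laid out, not something taken verbatim from this run:

    # build a minimal bdevperf JSON config for one NVMe-oF TCP controller
    jq -n '{ subsystems: [ { subsystem: "bdev", config: [ {
              method: "bdev_nvme_attach_controller",
              params: { name: "Nvme0", trtype: "tcp", traddr: "10.0.0.2",
                        adrfam: "ipv4", trsvcid: "4420",
                        subnqn: "nqn.2016-06.io.spdk:cnode0",
                        hostnqn: "nqn.2016-06.io.spdk:host0",
                        hdgst: false, ddgst: false } } ] } ] }' > /tmp/nvme0.json

    # same workload shape as the run above: queue depth 64, 64 KiB verify I/O, 1 second
    ./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1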
00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:26.842 04:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:26.842 "params": { 00:07:26.842 "name": "Nvme0", 00:07:26.842 "trtype": "tcp", 00:07:26.842 "traddr": "10.0.0.2", 00:07:26.842 "adrfam": "ipv4", 00:07:26.842 "trsvcid": "4420", 00:07:26.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.842 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:26.842 "hdgst": false, 00:07:26.842 "ddgst": false 00:07:26.842 }, 00:07:26.842 "method": "bdev_nvme_attach_controller" 00:07:26.842 }' 00:07:26.842 [2024-10-28 04:43:17.372170] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:07:26.842 [2024-10-28 04:43:17.372248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203507 ] 00:07:27.101 [2024-10-28 04:43:17.503851] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:27.101 [2024-10-28 04:43:17.541826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.101 [2024-10-28 04:43:17.590266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.359 Running I/O for 1 seconds... 00:07:28.734 1533.00 IOPS, 95.81 MiB/s 00:07:28.734 Latency(us) 00:07:28.734 [2024-10-28T03:43:19.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.734 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:28.734 Verification LBA range: start 0x0 length 0x400 00:07:28.734 Nvme0n1 : 1.04 1541.69 96.36 0.00 0.00 40865.12 8905.21 34063.63 00:07:28.734 [2024-10-28T03:43:19.330Z] =================================================================================================================== 00:07:28.734 [2024-10-28T03:43:19.330Z] Total : 1541.69 96.36 0.00 0.00 40865.12 8905.21 34063.63 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 
-- # modprobe -v -r nvme-tcp 00:07:28.734 rmmod nvme_tcp 00:07:28.734 rmmod nvme_fabrics 00:07:28.734 rmmod nvme_keyring 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2203055 ']' 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2203055 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2203055 ']' 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2203055 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2203055 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2203055' 00:07:28.734 killing process with pid 2203055 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2203055 00:07:28.734 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2203055 00:07:28.993 [2024-10-28 04:43:19.419389] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:28.993 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:28.993 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:28.993 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:28.993 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:28.993 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:28.993 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:28.993 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:28.993 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.993 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:28.993 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.993 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.993 04:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:31.531 00:07:31.531 real 0m9.956s 00:07:31.531 user 0m24.098s 00:07:31.531 sys 0m2.876s 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.531 ************************************ 00:07:31.531 END TEST nvmf_host_management 00:07:31.531 ************************************ 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.531 ************************************ 00:07:31.531 START TEST nvmf_lvol 00:07:31.531 ************************************ 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:31.531 * Looking for test storage... 00:07:31.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1689 -- # lcov --version 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:31.531 04:43:21 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:31.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.531 --rc genhtml_branch_coverage=1 00:07:31.531 --rc genhtml_function_coverage=1 00:07:31.531 --rc genhtml_legend=1 00:07:31.531 --rc geninfo_all_blocks=1 00:07:31.531 --rc geninfo_unexecuted_blocks=1 00:07:31.531 00:07:31.531 ' 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:31.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.531 --rc genhtml_branch_coverage=1 00:07:31.531 --rc genhtml_function_coverage=1 00:07:31.531 --rc genhtml_legend=1 00:07:31.531 --rc geninfo_all_blocks=1 00:07:31.531 --rc geninfo_unexecuted_blocks=1 00:07:31.531 00:07:31.531 ' 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:31.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.531 --rc genhtml_branch_coverage=1 00:07:31.531 --rc genhtml_function_coverage=1 00:07:31.531 --rc genhtml_legend=1 00:07:31.531 --rc geninfo_all_blocks=1 00:07:31.531 --rc geninfo_unexecuted_blocks=1 00:07:31.531 00:07:31.531 ' 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:31.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.531 --rc genhtml_branch_coverage=1 00:07:31.531 --rc genhtml_function_coverage=1 00:07:31.531 --rc genhtml_legend=1 00:07:31.531 --rc geninfo_all_blocks=1 00:07:31.531 --rc geninfo_unexecuted_blocks=1 00:07:31.531 00:07:31.531 ' 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.531 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:31.532 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:33.439 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.439 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:33.440 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.440 04:43:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:33.440 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:33.440 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.440 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.440 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.440 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.440 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:33.440 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:33.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:07:33.699 00:07:33.699 --- 10.0.0.2 ping statistics --- 00:07:33.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.699 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:33.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:07:33.699 00:07:33.699 --- 10.0.0.1 ping statistics --- 00:07:33.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.699 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2205704 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2205704 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2205704 ']' 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.699 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.699 [2024-10-28 04:43:24.164321] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:07:33.699 [2024-10-28 04:43:24.164418] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.957 [2024-10-28 04:43:24.302631] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
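For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) boils down to: move the target-side e810 port into a private network namespace, address both ends of the pair, open TCP port 4420 in the firewall, and ping in both directions before declaring the fabric usable. A condensed sketch of that sequence, using the interface names and 10.0.0.0/24 addresses from this run (both are rig-specific):

    # keep the target's network stack isolated from the initiator's
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # the initiator keeps cvl_0_1 in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP (port 4420) in; the harness tags the rule with an SPDK_NVMF
    # comment so it can strip it again during teardown
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator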
00:07:33.957 [2024-10-28 04:43:24.337403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.957 [2024-10-28 04:43:24.386181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.957 [2024-10-28 04:43:24.386241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.957 [2024-10-28 04:43:24.386257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.957 [2024-10-28 04:43:24.386270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.957 [2024-10-28 04:43:24.386288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.957 [2024-10-28 04:43:24.387922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.957 [2024-10-28 04:43:24.387954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.957 [2024-10-28 04:43:24.387957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.890 04:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.890 04:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:34.890 04:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:34.890 04:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.890 04:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:34.890 04:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.890 04:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:34.890 [2024-10-28 04:43:25.411643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.890 04:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:35.148 04:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:35.148 04:43:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:35.713 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:35.713 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:35.713 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:36.279 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2a54043c-0c98-4d71-a9db-4eab1e9be4fa 00:07:36.279 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2a54043c-0c98-4d71-a9db-4eab1e9be4fa lvol 20 00:07:36.279 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=01f9a0e5-5f42-4577-94cd-7c006240e981 00:07:36.279 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:36.537 04:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 01f9a0e5-5f42-4577-94cd-7c006240e981 00:07:36.795 04:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:37.053 [2024-10-28 04:43:27.629941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.310 04:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:37.568 04:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2206248 00:07:37.568 04:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:37.568 04:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:38.502 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 01f9a0e5-5f42-4577-94cd-7c006240e981 MY_SNAPSHOT 00:07:38.759 04:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e38e0255-ce45-4bac-8cc9-8e1ad7058a46 00:07:38.759 04:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 01f9a0e5-5f42-4577-94cd-7c006240e981 30 00:07:39.017 04:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e38e0255-ce45-4bac-8cc9-8e1ad7058a46 MY_CLONE 00:07:39.275 04:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bd8d4c6f-dc66-4790-8ec6-ef723d09c858 00:07:39.275 04:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bd8d4c6f-dc66-4790-8ec6-ef723d09c858 00:07:40.209 04:43:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2206248 00:07:48.316 Initializing NVMe Controllers 00:07:48.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:48.316 Controller IO queue size 128, less than required. 00:07:48.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:48.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:48.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:48.316 Initialization complete. Launching workers. 
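Before the perf numbers below, it is worth collapsing what target/nvmf_lvol.sh has done so far into its bare RPC sequence: build an lvol on top of a RAID-0 of two malloc bdevs, export it over NVMe/TCP, then snapshot, resize, clone and inflate it while spdk_nvme_perf hammers the namespace from the initiator side. A condensed sketch (rpc.py and binary paths shortened, UUIDs standing in for the ones printed above):

    rpc.py bdev_malloc_create 64 512                      # Malloc0
    rpc.py bdev_malloc_create 64 512                      # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB lvol
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 10 s of 4 KiB random writes against the namespace while the lvol is mutated underneath
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"
    wait "$perf_pid"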
00:07:48.316 ======================================================== 00:07:48.316 Latency(us) 00:07:48.316 Device Information : IOPS MiB/s Average min max 00:07:48.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10319.90 40.31 12410.99 1545.11 137616.19 00:07:48.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10286.90 40.18 12446.34 2266.82 72645.40 00:07:48.316 ======================================================== 00:07:48.316 Total : 20606.80 80.50 12428.63 1545.11 137616.19 00:07:48.316 00:07:48.316 04:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:48.316 04:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 01f9a0e5-5f42-4577-94cd-7c006240e981 00:07:48.574 04:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a54043c-0c98-4d71-a9db-4eab1e9be4fa 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.832 rmmod nvme_tcp 00:07:48.832 rmmod nvme_fabrics 00:07:48.832 rmmod nvme_keyring 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2205704 ']' 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2205704 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2205704 ']' 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2205704 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2205704 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.832 04:43:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2205704' 00:07:48.832 killing process with pid 2205704 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2205704 00:07:48.832 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2205704 00:07:49.092 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:49.092 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:49.092 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:49.092 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:49.092 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:49.092 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:49.092 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:49.092 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:49.092 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:49.092 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.092 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.092 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:51.624 00:07:51.624 real 0m20.093s 00:07:51.624 user 1m7.702s 00:07:51.624 sys 0m5.413s 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:51.624 ************************************ 00:07:51.624 END TEST nvmf_lvol 00:07:51.624 ************************************ 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.624 ************************************ 00:07:51.624 START TEST nvmf_lvs_grow 00:07:51.624 ************************************ 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:51.624 * Looking for test storage... 
00:07:51.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lcov --version 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:51.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.624 --rc genhtml_branch_coverage=1 00:07:51.624 --rc genhtml_function_coverage=1 00:07:51.624 --rc genhtml_legend=1 00:07:51.624 --rc geninfo_all_blocks=1 00:07:51.624 --rc geninfo_unexecuted_blocks=1 00:07:51.624 00:07:51.624 ' 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:51.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.624 --rc genhtml_branch_coverage=1 00:07:51.624 --rc genhtml_function_coverage=1 00:07:51.624 --rc genhtml_legend=1 00:07:51.624 --rc geninfo_all_blocks=1 00:07:51.624 --rc geninfo_unexecuted_blocks=1 00:07:51.624 00:07:51.624 ' 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:51.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.624 --rc genhtml_branch_coverage=1 00:07:51.624 --rc genhtml_function_coverage=1 00:07:51.624 --rc genhtml_legend=1 00:07:51.624 --rc geninfo_all_blocks=1 00:07:51.624 --rc geninfo_unexecuted_blocks=1 00:07:51.624 00:07:51.624 ' 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:51.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.624 --rc genhtml_branch_coverage=1 00:07:51.624 --rc genhtml_function_coverage=1 00:07:51.624 --rc genhtml_legend=1 00:07:51.624 --rc geninfo_all_blocks=1 00:07:51.624 --rc geninfo_unexecuted_blocks=1 00:07:51.624 00:07:51.624 ' 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:51.624 04:43:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.624 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.625 04:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:53.527 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:53.527 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.527 04:43:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.527 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:53.528 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:53.528 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.528 04:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:07:53.528 00:07:53.528 --- 10.0.0.2 ping statistics --- 00:07:53.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.528 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:07:53.528 00:07:53.528 --- 10.0.0.1 ping statistics --- 00:07:53.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.528 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2209481 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2209481 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2209481 ']' 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.528 04:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.787 [2024-10-28 04:43:44.129494] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
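The nvmfappstart call traced here follows the same pattern as the lvol run above, just with a single-core mask (0x1): start nvmf_tgt inside the target namespace, remember its pid, and block until its RPC socket answers. Roughly as follows (paths shortened; the polling loop is a stand-in for the harness's waitforlisten helper, which works along these lines):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # wait until the app is listening on /var/tmp/spdk.sock before issuing RPCs
    until rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1     # bail out if the target died during startup
        sleep 0.5
    done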
00:07:53.787 [2024-10-28 04:43:44.129584] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.787 [2024-10-28 04:43:44.271295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:53.787 [2024-10-28 04:43:44.313414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.787 [2024-10-28 04:43:44.361510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.787 [2024-10-28 04:43:44.361577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.787 [2024-10-28 04:43:44.361593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.787 [2024-10-28 04:43:44.361606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.787 [2024-10-28 04:43:44.361617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.787 [2024-10-28 04:43:44.362299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.721 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.721 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:54.721 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:54.721 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:54.721 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.721 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.721 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:54.979 [2024-10-28 04:43:45.398773] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.979 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:54.979 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.979 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.979 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.979 ************************************ 00:07:54.979 START TEST lvs_grow_clean 00:07:54.979 ************************************ 00:07:54.979 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:54.979 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:54.979 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:54.980 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 
00:07:54.980 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:54.980 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:54.980 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:54.980 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.980 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.980 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:55.238 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:55.238 04:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:55.496 04:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cece87ef-1f1e-4805-a234-bc87ccd5584d 00:07:55.496 04:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cece87ef-1f1e-4805-a234-bc87ccd5584d 00:07:55.496 04:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:55.754 04:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:55.754 04:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:55.754 04:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cece87ef-1f1e-4805-a234-bc87ccd5584d lvol 150 00:07:56.012 04:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=cc9eb65c-9764-4b0f-a036-e071a4f3302c 00:07:56.012 04:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:56.012 04:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:56.270 [2024-10-28 04:43:46.821413] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:56.270 [2024-10-28 04:43:46.821507] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:56.270 true 00:07:56.270 04:43:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cece87ef-1f1e-4805-a234-bc87ccd5584d 00:07:56.270 04:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:56.528 04:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:56.528 04:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:57.095 04:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cc9eb65c-9764-4b0f-a036-e071a4f3302c 00:07:57.095 04:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:57.353 [2024-10-28 04:43:47.910114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.353 04:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.612 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2210020 00:07:57.612 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:57.612 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:57.612 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2210020 /var/tmp/bdevperf.sock 00:07:57.612 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2210020 ']' 00:07:57.612 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:57.612 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.612 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:57.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:57.612 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.612 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:57.870 [2024-10-28 04:43:48.244850] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
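The lvs_grow_clean setup traced above reads as one short recipe: back an lvol store with a 200 MiB AIO file, carve a 150 MiB lvol out of it, then double the file and rescan the AIO bdev so SPDK sees the extra blocks. Note that right after the rescan the store still reports the original 49 data clusters (4 MiB each), which is exactly the situation the remainder of the test exercises. Condensed, with paths shortened and the store UUID as a placeholder:

    truncate -s 200M /path/to/aio_bdev            # backing file for the AIO bdev
    rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 49
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M /path/to/aio_bdev            # grow the backing file...
    rpc.py bdev_aio_rescan aio_bdev               # ...and let the AIO bdev pick up the new size
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49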
00:07:57.870 [2024-10-28 04:43:48.244947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210020 ] 00:07:57.870 [2024-10-28 04:43:48.377147] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:57.870 [2024-10-28 04:43:48.418678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.128 [2024-10-28 04:43:48.465685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.694 04:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.694 04:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:58.694 04:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:59.260 Nvme0n1 00:07:59.260 04:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:59.518 [ 00:07:59.518 { 00:07:59.518 "name": "Nvme0n1", 00:07:59.518 "aliases": [ 00:07:59.518 "cc9eb65c-9764-4b0f-a036-e071a4f3302c" 00:07:59.518 ], 00:07:59.518 "product_name": "NVMe disk", 00:07:59.518 "block_size": 4096, 00:07:59.518 "num_blocks": 38912, 00:07:59.518 "uuid": "cc9eb65c-9764-4b0f-a036-e071a4f3302c", 00:07:59.518 "numa_id": 0, 00:07:59.518 "assigned_rate_limits": { 00:07:59.518 "rw_ios_per_sec": 0, 00:07:59.518 "rw_mbytes_per_sec": 0, 00:07:59.518 "r_mbytes_per_sec": 0, 00:07:59.518 "w_mbytes_per_sec": 0 00:07:59.518 }, 00:07:59.518 "claimed": false, 00:07:59.518 "zoned": false, 00:07:59.518 "supported_io_types": { 00:07:59.518 "read": true, 00:07:59.518 "write": true, 00:07:59.518 "unmap": true, 00:07:59.518 "flush": true, 00:07:59.518 "reset": true, 00:07:59.518 "nvme_admin": true, 00:07:59.518 "nvme_io": true, 00:07:59.518 "nvme_io_md": false, 00:07:59.518 "write_zeroes": true, 00:07:59.518 "zcopy": false, 00:07:59.518 "get_zone_info": false, 00:07:59.518 "zone_management": false, 00:07:59.518 "zone_append": false, 00:07:59.518 "compare": true, 00:07:59.518 "compare_and_write": true, 00:07:59.518 "abort": true, 00:07:59.518 "seek_hole": false, 00:07:59.518 "seek_data": false, 00:07:59.518 "copy": true, 00:07:59.518 "nvme_iov_md": false 00:07:59.518 }, 00:07:59.518 "memory_domains": [ 00:07:59.518 { 00:07:59.518 "dma_device_id": "system", 00:07:59.518 "dma_device_type": 1 00:07:59.518 } 00:07:59.518 ], 00:07:59.518 "driver_specific": { 00:07:59.518 "nvme": [ 00:07:59.518 { 00:07:59.518 "trid": { 00:07:59.518 "trtype": "TCP", 00:07:59.518 "adrfam": "IPv4", 00:07:59.518 "traddr": "10.0.0.2", 00:07:59.518 "trsvcid": "4420", 00:07:59.518 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:59.518 }, 00:07:59.518 "ctrlr_data": { 00:07:59.518 "cntlid": 1, 00:07:59.518 "vendor_id": "0x8086", 00:07:59.518 "model_number": "SPDK bdev Controller", 00:07:59.518 "serial_number": "SPDK0", 00:07:59.518 "firmware_revision": "25.01", 00:07:59.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:59.518 "oacs": 
{ 00:07:59.518 "security": 0, 00:07:59.518 "format": 0, 00:07:59.518 "firmware": 0, 00:07:59.518 "ns_manage": 0 00:07:59.518 }, 00:07:59.518 "multi_ctrlr": true, 00:07:59.518 "ana_reporting": false 00:07:59.518 }, 00:07:59.518 "vs": { 00:07:59.518 "nvme_version": "1.3" 00:07:59.518 }, 00:07:59.518 "ns_data": { 00:07:59.518 "id": 1, 00:07:59.518 "can_share": true 00:07:59.518 } 00:07:59.518 } 00:07:59.518 ], 00:07:59.518 "mp_policy": "active_passive" 00:07:59.518 } 00:07:59.518 } 00:07:59.518 ] 00:07:59.518 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2210182 00:07:59.518 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:59.518 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:59.777 Running I/O for 10 seconds... 00:08:00.712 Latency(us) 00:08:00.712 [2024-10-28T03:43:51.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.712 Nvme0n1 : 1.00 13607.00 53.15 0.00 0.00 0.00 0.00 0.00 00:08:00.712 [2024-10-28T03:43:51.308Z] =================================================================================================================== 00:08:00.712 [2024-10-28T03:43:51.308Z] Total : 13607.00 53.15 0.00 0.00 0.00 0.00 0.00 00:08:00.712 00:08:01.646 04:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cece87ef-1f1e-4805-a234-bc87ccd5584d 00:08:01.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.646 Nvme0n1 : 2.00 13744.50 53.69 0.00 0.00 0.00 0.00 0.00 00:08:01.646 [2024-10-28T03:43:52.242Z] =================================================================================================================== 00:08:01.646 [2024-10-28T03:43:52.242Z] Total : 13744.50 53.69 0.00 0.00 0.00 0.00 0.00 00:08:01.646 00:08:01.904 true 00:08:01.904 04:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cece87ef-1f1e-4805-a234-bc87ccd5584d 00:08:01.904 04:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:02.196 04:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:02.196 04:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:02.196 04:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2210182 00:08:02.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.787 Nvme0n1 : 3.00 13809.67 53.94 0.00 0.00 0.00 0.00 0.00 00:08:02.787 [2024-10-28T03:43:53.383Z] =================================================================================================================== 00:08:02.787 [2024-10-28T03:43:53.383Z] Total : 13809.67 53.94 0.00 0.00 0.00 0.00 0.00 00:08:02.787 00:08:03.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.750 
Nvme0n1 : 4.00 13889.25 54.25 0.00 0.00 0.00 0.00 0.00 00:08:03.750 [2024-10-28T03:43:54.346Z] =================================================================================================================== 00:08:03.750 [2024-10-28T03:43:54.346Z] Total : 13889.25 54.25 0.00 0.00 0.00 0.00 0.00 00:08:03.750 00:08:04.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.684 Nvme0n1 : 5.00 13955.20 54.51 0.00 0.00 0.00 0.00 0.00 00:08:04.684 [2024-10-28T03:43:55.280Z] =================================================================================================================== 00:08:04.684 [2024-10-28T03:43:55.280Z] Total : 13955.20 54.51 0.00 0.00 0.00 0.00 0.00 00:08:04.684 00:08:05.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.617 Nvme0n1 : 6.00 13974.00 54.59 0.00 0.00 0.00 0.00 0.00 00:08:05.617 [2024-10-28T03:43:56.213Z] =================================================================================================================== 00:08:05.617 [2024-10-28T03:43:56.213Z] Total : 13974.00 54.59 0.00 0.00 0.00 0.00 0.00 00:08:05.617 00:08:06.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.554 Nvme0n1 : 7.00 14046.00 54.87 0.00 0.00 0.00 0.00 0.00 00:08:06.554 [2024-10-28T03:43:57.150Z] =================================================================================================================== 00:08:06.554 [2024-10-28T03:43:57.150Z] Total : 14046.00 54.87 0.00 0.00 0.00 0.00 0.00 00:08:06.554 00:08:07.929 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.929 Nvme0n1 : 8.00 14052.38 54.89 0.00 0.00 0.00 0.00 0.00 00:08:07.929 [2024-10-28T03:43:58.525Z] =================================================================================================================== 00:08:07.929 [2024-10-28T03:43:58.525Z] Total : 14052.38 54.89 0.00 0.00 0.00 0.00 0.00 00:08:07.929 00:08:08.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.864 Nvme0n1 : 9.00 14074.11 54.98 0.00 0.00 0.00 0.00 0.00 00:08:08.864 [2024-10-28T03:43:59.460Z] =================================================================================================================== 00:08:08.864 [2024-10-28T03:43:59.460Z] Total : 14074.11 54.98 0.00 0.00 0.00 0.00 0.00 00:08:08.864 00:08:09.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.798 Nvme0n1 : 10.00 14080.00 55.00 0.00 0.00 0.00 0.00 0.00 00:08:09.798 [2024-10-28T03:44:00.394Z] =================================================================================================================== 00:08:09.798 [2024-10-28T03:44:00.394Z] Total : 14080.00 55.00 0.00 0.00 0.00 0.00 0.00 00:08:09.798 00:08:09.798 00:08:09.798 Latency(us) 00:08:09.798 [2024-10-28T03:44:00.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.798 Nvme0n1 : 10.01 14082.26 55.01 0.00 0.00 9085.00 4622.92 18005.06 00:08:09.798 [2024-10-28T03:44:00.394Z] =================================================================================================================== 00:08:09.798 [2024-10-28T03:44:00.394Z] Total : 14082.26 55.01 0.00 0.00 9085.00 4622.92 18005.06 00:08:09.798 { 00:08:09.798 "results": [ 00:08:09.798 { 00:08:09.798 "job": "Nvme0n1", 00:08:09.799 "core_mask": "0x2", 00:08:09.799 "workload": "randwrite", 00:08:09.799 
"status": "finished", 00:08:09.799 "queue_depth": 128, 00:08:09.799 "io_size": 4096, 00:08:09.799 "runtime": 10.007488, 00:08:09.799 "iops": 14082.255207300774, 00:08:09.799 "mibps": 55.00880940351865, 00:08:09.799 "io_failed": 0, 00:08:09.799 "io_timeout": 0, 00:08:09.799 "avg_latency_us": 9084.996133760978, 00:08:09.799 "min_latency_us": 4622.921848895489, 00:08:09.799 "max_latency_us": 18005.064043066643 00:08:09.799 } 00:08:09.799 ], 00:08:09.799 "core_count": 1 00:08:09.799 } 00:08:09.799 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2210020 00:08:09.799 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2210020 ']' 00:08:09.799 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2210020 00:08:09.799 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:09.799 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.799 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2210020 00:08:09.799 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:09.799 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:09.799 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2210020' 00:08:09.799 killing process with pid 2210020 00:08:09.799 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2210020 00:08:09.799 Received shutdown signal, test time was about 10.000000 seconds 00:08:09.799 00:08:09.799 Latency(us) 00:08:09.799 [2024-10-28T03:44:00.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.799 [2024-10-28T03:44:00.395Z] =================================================================================================================== 00:08:09.799 [2024-10-28T03:44:00.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:09.799 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2210020 00:08:09.799 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:10.366 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:10.366 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cece87ef-1f1e-4805-a234-bc87ccd5584d 00:08:10.366 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:10.625 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:10.625 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:10.625 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:11.190 [2024-10-28 04:44:01.480247] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:11.190 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cece87ef-1f1e-4805-a234-bc87ccd5584d 00:08:11.190 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:11.190 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cece87ef-1f1e-4805-a234-bc87ccd5584d 00:08:11.190 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.190 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.190 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.190 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.190 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.190 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.190 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.190 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:11.190 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cece87ef-1f1e-4805-a234-bc87ccd5584d 00:08:11.190 request: 00:08:11.190 { 00:08:11.190 "uuid": "cece87ef-1f1e-4805-a234-bc87ccd5584d", 00:08:11.190 "method": "bdev_lvol_get_lvstores", 00:08:11.190 "req_id": 1 00:08:11.190 } 00:08:11.190 Got JSON-RPC error response 00:08:11.190 response: 00:08:11.190 { 00:08:11.190 "code": -19, 00:08:11.190 "message": "No such device" 00:08:11.190 } 00:08:11.448 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:11.448 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.448 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:11.448 04:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.448 04:44:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.448 aio_bdev 00:08:11.706 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cc9eb65c-9764-4b0f-a036-e071a4f3302c 00:08:11.706 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=cc9eb65c-9764-4b0f-a036-e071a4f3302c 00:08:11.706 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:11.706 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:11.706 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:11.706 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:11.706 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:11.964 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cc9eb65c-9764-4b0f-a036-e071a4f3302c -t 2000 00:08:12.223 [ 00:08:12.223 { 00:08:12.223 "name": "cc9eb65c-9764-4b0f-a036-e071a4f3302c", 00:08:12.223 "aliases": [ 00:08:12.223 "lvs/lvol" 00:08:12.223 ], 00:08:12.223 "product_name": "Logical Volume", 00:08:12.223 "block_size": 4096, 00:08:12.223 "num_blocks": 38912, 00:08:12.223 "uuid": "cc9eb65c-9764-4b0f-a036-e071a4f3302c", 00:08:12.223 "assigned_rate_limits": { 00:08:12.223 "rw_ios_per_sec": 0, 00:08:12.223 "rw_mbytes_per_sec": 0, 00:08:12.223 "r_mbytes_per_sec": 0, 00:08:12.223 "w_mbytes_per_sec": 0 00:08:12.223 }, 00:08:12.223 "claimed": false, 00:08:12.223 "zoned": false, 00:08:12.223 "supported_io_types": { 00:08:12.223 "read": true, 00:08:12.223 "write": true, 00:08:12.223 "unmap": true, 00:08:12.223 "flush": false, 00:08:12.223 "reset": true, 00:08:12.223 "nvme_admin": false, 00:08:12.223 "nvme_io": false, 00:08:12.223 "nvme_io_md": false, 00:08:12.223 "write_zeroes": true, 00:08:12.223 "zcopy": false, 00:08:12.223 "get_zone_info": false, 00:08:12.223 "zone_management": false, 00:08:12.223 "zone_append": false, 00:08:12.223 "compare": false, 00:08:12.223 "compare_and_write": false, 00:08:12.223 "abort": false, 00:08:12.223 "seek_hole": true, 00:08:12.223 "seek_data": true, 00:08:12.223 "copy": false, 00:08:12.223 "nvme_iov_md": false 00:08:12.223 }, 00:08:12.223 "driver_specific": { 00:08:12.223 "lvol": { 00:08:12.223 "lvol_store_uuid": "cece87ef-1f1e-4805-a234-bc87ccd5584d", 00:08:12.223 "base_bdev": "aio_bdev", 00:08:12.223 "thin_provision": false, 00:08:12.223 "num_allocated_clusters": 38, 00:08:12.223 "snapshot": false, 00:08:12.223 "clone": false, 00:08:12.223 "esnap_clone": false 00:08:12.223 } 00:08:12.223 } 00:08:12.223 } 00:08:12.223 ] 00:08:12.223 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:12.223 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cece87ef-1f1e-4805-a234-bc87ccd5584d 00:08:12.223 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:12.482 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:12.482 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cece87ef-1f1e-4805-a234-bc87ccd5584d 00:08:12.482 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:12.741 04:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:12.741 04:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cc9eb65c-9764-4b0f-a036-e071a4f3302c 00:08:12.999 04:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cece87ef-1f1e-4805-a234-bc87ccd5584d 00:08:13.258 04:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:13.516 04:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.516 00:08:13.516 real 0m18.534s 00:08:13.516 user 0m18.216s 00:08:13.516 sys 0m1.856s 00:08:13.516 04:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.516 04:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:13.516 ************************************ 00:08:13.516 END TEST lvs_grow_clean 00:08:13.516 ************************************ 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.516 ************************************ 00:08:13.516 START TEST lvs_grow_dirty 00:08:13.516 ************************************ 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.516 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:13.775 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:13.775 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:14.033 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:14.033 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:14.033 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:14.293 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:14.293 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:14.293 04:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 08904c2e-848c-4307-8f9e-63691dbdaed3 lvol 150 00:08:14.859 04:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6b47917c-9c47-469a-960d-0f867657caab 00:08:14.859 04:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.859 04:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:14.859 [2024-10-28 04:44:05.411409] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:14.859 [2024-10-28 04:44:05.411525] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:14.859 true 00:08:14.859 04:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:14.859 04:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:15.118 04:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:15.118 04:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.376 04:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6b47917c-9c47-469a-960d-0f867657caab 00:08:15.943 04:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.943 [2024-10-28 04:44:06.484073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.943 04:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.201 04:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2212194 00:08:16.201 04:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:16.201 04:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:16.201 04:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2212194 /var/tmp/bdevperf.sock 00:08:16.201 04:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2212194 ']' 00:08:16.201 04:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:16.201 04:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.201 04:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:16.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:16.201 04:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.201 04:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.460 [2024-10-28 04:44:06.812588] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
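The bdevperf process whose startup banner appears above is wired up the same way in both the clean and dirty variants. A sketch of that wiring follows; $rpc, $spdk and $lvol are the same illustrative shorthand as before, and the address and port (10.0.0.2:4420) are taken from the trace.

# Export the lvol over NVMe/TCP from the target side.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Start bdevperf as the initiator (-z makes it wait for the perform_tests RPC),
# attach the subsystem as bdev Nvme0n1, and run the 10-second randwrite job.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# (the test script waits for /var/tmp/bdevperf.sock to appear before this)
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests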
00:08:16.460 [2024-10-28 04:44:06.812681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2212194 ] 00:08:16.460 [2024-10-28 04:44:06.944352] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:16.460 [2024-10-28 04:44:06.983880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.460 [2024-10-28 04:44:07.033235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.393 04:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.393 04:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:17.393 04:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:17.650 Nvme0n1 00:08:17.908 04:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:18.166 [ 00:08:18.166 { 00:08:18.166 "name": "Nvme0n1", 00:08:18.166 "aliases": [ 00:08:18.166 "6b47917c-9c47-469a-960d-0f867657caab" 00:08:18.166 ], 00:08:18.166 "product_name": "NVMe disk", 00:08:18.166 "block_size": 4096, 00:08:18.166 "num_blocks": 38912, 00:08:18.166 "uuid": "6b47917c-9c47-469a-960d-0f867657caab", 00:08:18.166 "numa_id": 0, 00:08:18.166 "assigned_rate_limits": { 00:08:18.166 "rw_ios_per_sec": 0, 00:08:18.166 "rw_mbytes_per_sec": 0, 00:08:18.166 "r_mbytes_per_sec": 0, 00:08:18.166 "w_mbytes_per_sec": 0 00:08:18.166 }, 00:08:18.166 "claimed": false, 00:08:18.166 "zoned": false, 00:08:18.166 "supported_io_types": { 00:08:18.166 "read": true, 00:08:18.166 "write": true, 00:08:18.166 "unmap": true, 00:08:18.166 "flush": true, 00:08:18.166 "reset": true, 00:08:18.166 "nvme_admin": true, 00:08:18.166 "nvme_io": true, 00:08:18.166 "nvme_io_md": false, 00:08:18.166 "write_zeroes": true, 00:08:18.166 "zcopy": false, 00:08:18.166 "get_zone_info": false, 00:08:18.166 "zone_management": false, 00:08:18.166 "zone_append": false, 00:08:18.166 "compare": true, 00:08:18.166 "compare_and_write": true, 00:08:18.166 "abort": true, 00:08:18.166 "seek_hole": false, 00:08:18.166 "seek_data": false, 00:08:18.166 "copy": true, 00:08:18.166 "nvme_iov_md": false 00:08:18.166 }, 00:08:18.166 "memory_domains": [ 00:08:18.166 { 00:08:18.166 "dma_device_id": "system", 00:08:18.166 "dma_device_type": 1 00:08:18.166 } 00:08:18.166 ], 00:08:18.166 "driver_specific": { 00:08:18.166 "nvme": [ 00:08:18.166 { 00:08:18.166 "trid": { 00:08:18.166 "trtype": "TCP", 00:08:18.166 "adrfam": "IPv4", 00:08:18.166 "traddr": "10.0.0.2", 00:08:18.166 "trsvcid": "4420", 00:08:18.166 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:18.166 }, 00:08:18.166 "ctrlr_data": { 00:08:18.166 "cntlid": 1, 00:08:18.166 "vendor_id": "0x8086", 00:08:18.166 "model_number": "SPDK bdev Controller", 00:08:18.166 "serial_number": "SPDK0", 00:08:18.166 "firmware_revision": "25.01", 00:08:18.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:18.166 "oacs": 
{ 00:08:18.166 "security": 0, 00:08:18.166 "format": 0, 00:08:18.166 "firmware": 0, 00:08:18.166 "ns_manage": 0 00:08:18.166 }, 00:08:18.166 "multi_ctrlr": true, 00:08:18.166 "ana_reporting": false 00:08:18.166 }, 00:08:18.166 "vs": { 00:08:18.166 "nvme_version": "1.3" 00:08:18.166 }, 00:08:18.166 "ns_data": { 00:08:18.166 "id": 1, 00:08:18.166 "can_share": true 00:08:18.166 } 00:08:18.166 } 00:08:18.166 ], 00:08:18.166 "mp_policy": "active_passive" 00:08:18.166 } 00:08:18.166 } 00:08:18.166 ] 00:08:18.166 04:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2212342 00:08:18.166 04:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:18.166 04:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:18.166 Running I/O for 10 seconds... 00:08:19.101 Latency(us) 00:08:19.101 [2024-10-28T03:44:09.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.101 Nvme0n1 : 1.00 14101.00 55.08 0.00 0.00 0.00 0.00 0.00 00:08:19.101 [2024-10-28T03:44:09.697Z] =================================================================================================================== 00:08:19.101 [2024-10-28T03:44:09.697Z] Total : 14101.00 55.08 0.00 0.00 0.00 0.00 0.00 00:08:19.101 00:08:20.035 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:20.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.293 Nvme0n1 : 2.00 14131.50 55.20 0.00 0.00 0.00 0.00 0.00 00:08:20.293 [2024-10-28T03:44:10.889Z] =================================================================================================================== 00:08:20.293 [2024-10-28T03:44:10.889Z] Total : 14131.50 55.20 0.00 0.00 0.00 0.00 0.00 00:08:20.293 00:08:20.293 true 00:08:20.293 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:20.293 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:20.552 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:20.552 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:20.552 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2212342 00:08:21.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.145 Nvme0n1 : 3.00 14268.33 55.74 0.00 0.00 0.00 0.00 0.00 00:08:21.145 [2024-10-28T03:44:11.741Z] =================================================================================================================== 00:08:21.145 [2024-10-28T03:44:11.741Z] Total : 14268.33 55.74 0.00 0.00 0.00 0.00 0.00 00:08:21.145 00:08:22.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.081 
Nvme0n1 : 4.00 14368.25 56.13 0.00 0.00 0.00 0.00 0.00 00:08:22.081 [2024-10-28T03:44:12.677Z] =================================================================================================================== 00:08:22.081 [2024-10-28T03:44:12.677Z] Total : 14368.25 56.13 0.00 0.00 0.00 0.00 0.00 00:08:22.081 00:08:23.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.457 Nvme0n1 : 5.00 14403.20 56.26 0.00 0.00 0.00 0.00 0.00 00:08:23.457 [2024-10-28T03:44:14.053Z] =================================================================================================================== 00:08:23.457 [2024-10-28T03:44:14.053Z] Total : 14403.20 56.26 0.00 0.00 0.00 0.00 0.00 00:08:23.457 00:08:24.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.392 Nvme0n1 : 6.00 14468.83 56.52 0.00 0.00 0.00 0.00 0.00 00:08:24.392 [2024-10-28T03:44:14.988Z] =================================================================================================================== 00:08:24.392 [2024-10-28T03:44:14.988Z] Total : 14468.83 56.52 0.00 0.00 0.00 0.00 0.00 00:08:24.392 00:08:25.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.328 Nvme0n1 : 7.00 14516.14 56.70 0.00 0.00 0.00 0.00 0.00 00:08:25.328 [2024-10-28T03:44:15.924Z] =================================================================================================================== 00:08:25.328 [2024-10-28T03:44:15.924Z] Total : 14516.14 56.70 0.00 0.00 0.00 0.00 0.00 00:08:25.328 00:08:26.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.263 Nvme0n1 : 8.00 14551.25 56.84 0.00 0.00 0.00 0.00 0.00 00:08:26.263 [2024-10-28T03:44:16.859Z] =================================================================================================================== 00:08:26.263 [2024-10-28T03:44:16.859Z] Total : 14551.25 56.84 0.00 0.00 0.00 0.00 0.00 00:08:26.263 00:08:27.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.198 Nvme0n1 : 9.00 14585.67 56.98 0.00 0.00 0.00 0.00 0.00 00:08:27.198 [2024-10-28T03:44:17.794Z] =================================================================================================================== 00:08:27.198 [2024-10-28T03:44:17.794Z] Total : 14585.67 56.98 0.00 0.00 0.00 0.00 0.00 00:08:27.198 00:08:28.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.132 Nvme0n1 : 10.00 14601.20 57.04 0.00 0.00 0.00 0.00 0.00 00:08:28.132 [2024-10-28T03:44:18.728Z] =================================================================================================================== 00:08:28.132 [2024-10-28T03:44:18.728Z] Total : 14601.20 57.04 0.00 0.00 0.00 0.00 0.00 00:08:28.132 00:08:28.132 00:08:28.132 Latency(us) 00:08:28.132 [2024-10-28T03:44:18.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.132 Nvme0n1 : 10.01 14607.44 57.06 0.00 0.00 8758.04 5231.20 16545.19 00:08:28.132 [2024-10-28T03:44:18.728Z] =================================================================================================================== 00:08:28.132 [2024-10-28T03:44:18.728Z] Total : 14607.44 57.06 0.00 0.00 8758.04 5231.20 16545.19 00:08:28.132 { 00:08:28.132 "results": [ 00:08:28.132 { 00:08:28.132 "job": "Nvme0n1", 00:08:28.132 "core_mask": "0x2", 00:08:28.132 "workload": "randwrite", 00:08:28.132 
"status": "finished", 00:08:28.132 "queue_depth": 128, 00:08:28.132 "io_size": 4096, 00:08:28.132 "runtime": 10.008802, 00:08:28.132 "iops": 14607.442529085898, 00:08:28.132 "mibps": 57.06032237924179, 00:08:28.132 "io_failed": 0, 00:08:28.132 "io_timeout": 0, 00:08:28.132 "avg_latency_us": 8758.035348530868, 00:08:28.132 "min_latency_us": 5231.201039539633, 00:08:28.132 "max_latency_us": 16545.1939855207 00:08:28.132 } 00:08:28.132 ], 00:08:28.132 "core_count": 1 00:08:28.133 } 00:08:28.133 04:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2212194 00:08:28.133 04:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2212194 ']' 00:08:28.133 04:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2212194 00:08:28.133 04:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:28.133 04:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.133 04:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2212194 00:08:28.133 04:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:28.133 04:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:28.133 04:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2212194' 00:08:28.133 killing process with pid 2212194 00:08:28.133 04:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2212194 00:08:28.133 Received shutdown signal, test time was about 10.000000 seconds 00:08:28.133 00:08:28.133 Latency(us) 00:08:28.133 [2024-10-28T03:44:18.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.133 [2024-10-28T03:44:18.729Z] =================================================================================================================== 00:08:28.133 [2024-10-28T03:44:18.729Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:28.133 04:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2212194 00:08:28.391 04:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.649 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:28.906 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:28.906 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:29.165 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:29.165 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:29.165 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2209481 00:08:29.165 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2209481 00:08:29.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2209481 Killed "${NVMF_APP[@]}" "$@" 00:08:29.423 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:29.423 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:29.423 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:29.423 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.423 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:29.424 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2213639 00:08:29.424 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:29.424 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2213639 00:08:29.424 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2213639 ']' 00:08:29.424 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.424 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.424 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.424 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.424 04:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:29.424 [2024-10-28 04:44:19.823064] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:08:29.424 [2024-10-28 04:44:19.823162] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.424 [2024-10-28 04:44:19.963774] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:29.424 [2024-10-28 04:44:20.001080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.683 [2024-10-28 04:44:20.053010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.683 [2024-10-28 04:44:20.053071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
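The step that gives the dirty variant its name comes next in the log: the original target was SIGKILLed while the grown lvstore was still dirty, a fresh nvmf_tgt was started, and re-creating the AIO bdev forces blobstore recovery. What it verifies is roughly the sketch below (same shorthand as before; the expected 61 free clusters are the 99 total minus the 38 allocated to the lvol, matching the bdev JSON in the trace).

# Re-create the AIO bdev on the same 400 MiB file under the new target.
# Opening it replays the blobstore metadata ("Performing recovery on
# blobstore" / "Recover: blob ..." in the log) and re-registers lvs/lvol.
$rpc bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b "$lvol" -t 2000   # the lvol must reappear after recovery

# The grown geometry must have survived the unclean shutdown:
# 99 data clusters in total, 61 of them still free.
free=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
total=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( free == 61 && total == 99 ))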
00:08:29.683 [2024-10-28 04:44:20.053099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.683 [2024-10-28 04:44:20.053112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.683 [2024-10-28 04:44:20.053122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.683 [2024-10-28 04:44:20.053793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.248 04:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.248 04:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:30.248 04:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:30.248 04:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:30.248 04:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:30.506 04:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.506 04:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:30.767 [2024-10-28 04:44:21.122896] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:30.767 [2024-10-28 04:44:21.123055] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:30.767 [2024-10-28 04:44:21.123102] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:30.767 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:30.767 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6b47917c-9c47-469a-960d-0f867657caab 00:08:30.767 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=6b47917c-9c47-469a-960d-0f867657caab 00:08:30.767 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.767 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:30.767 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.767 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.767 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:31.026 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6b47917c-9c47-469a-960d-0f867657caab -t 2000 00:08:31.284 [ 00:08:31.284 { 00:08:31.284 "name": "6b47917c-9c47-469a-960d-0f867657caab", 00:08:31.284 "aliases": [ 00:08:31.284 "lvs/lvol" 00:08:31.284 ], 00:08:31.284 "product_name": 
"Logical Volume", 00:08:31.284 "block_size": 4096, 00:08:31.284 "num_blocks": 38912, 00:08:31.284 "uuid": "6b47917c-9c47-469a-960d-0f867657caab", 00:08:31.284 "assigned_rate_limits": { 00:08:31.284 "rw_ios_per_sec": 0, 00:08:31.284 "rw_mbytes_per_sec": 0, 00:08:31.284 "r_mbytes_per_sec": 0, 00:08:31.284 "w_mbytes_per_sec": 0 00:08:31.284 }, 00:08:31.284 "claimed": false, 00:08:31.284 "zoned": false, 00:08:31.284 "supported_io_types": { 00:08:31.284 "read": true, 00:08:31.284 "write": true, 00:08:31.284 "unmap": true, 00:08:31.284 "flush": false, 00:08:31.284 "reset": true, 00:08:31.284 "nvme_admin": false, 00:08:31.284 "nvme_io": false, 00:08:31.284 "nvme_io_md": false, 00:08:31.284 "write_zeroes": true, 00:08:31.284 "zcopy": false, 00:08:31.284 "get_zone_info": false, 00:08:31.284 "zone_management": false, 00:08:31.284 "zone_append": false, 00:08:31.284 "compare": false, 00:08:31.284 "compare_and_write": false, 00:08:31.284 "abort": false, 00:08:31.284 "seek_hole": true, 00:08:31.284 "seek_data": true, 00:08:31.284 "copy": false, 00:08:31.284 "nvme_iov_md": false 00:08:31.284 }, 00:08:31.284 "driver_specific": { 00:08:31.284 "lvol": { 00:08:31.284 "lvol_store_uuid": "08904c2e-848c-4307-8f9e-63691dbdaed3", 00:08:31.284 "base_bdev": "aio_bdev", 00:08:31.284 "thin_provision": false, 00:08:31.284 "num_allocated_clusters": 38, 00:08:31.284 "snapshot": false, 00:08:31.284 "clone": false, 00:08:31.284 "esnap_clone": false 00:08:31.284 } 00:08:31.284 } 00:08:31.284 } 00:08:31.284 ] 00:08:31.284 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:31.285 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:31.285 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:31.542 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:31.543 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:31.543 04:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:31.801 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:31.801 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.059 [2024-10-28 04:44:22.477268] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:32.059 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:32.059 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:32.059 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:32.059 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.059 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.059 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.059 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.059 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.059 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.059 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.059 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:32.059 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:32.317 request: 00:08:32.317 { 00:08:32.317 "uuid": "08904c2e-848c-4307-8f9e-63691dbdaed3", 00:08:32.317 "method": "bdev_lvol_get_lvstores", 00:08:32.317 "req_id": 1 00:08:32.317 } 00:08:32.317 Got JSON-RPC error response 00:08:32.317 response: 00:08:32.317 { 00:08:32.317 "code": -19, 00:08:32.317 "message": "No such device" 00:08:32.317 } 00:08:32.317 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:32.317 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:32.317 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:32.317 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:32.317 04:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.575 aio_bdev 00:08:32.575 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6b47917c-9c47-469a-960d-0f867657caab 00:08:32.575 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=6b47917c-9c47-469a-960d-0f867657caab 00:08:32.575 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.575 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:32.575 04:44:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.575 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.575 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:32.847 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6b47917c-9c47-469a-960d-0f867657caab -t 2000 00:08:33.153 [ 00:08:33.153 { 00:08:33.153 "name": "6b47917c-9c47-469a-960d-0f867657caab", 00:08:33.153 "aliases": [ 00:08:33.153 "lvs/lvol" 00:08:33.153 ], 00:08:33.153 "product_name": "Logical Volume", 00:08:33.153 "block_size": 4096, 00:08:33.153 "num_blocks": 38912, 00:08:33.153 "uuid": "6b47917c-9c47-469a-960d-0f867657caab", 00:08:33.153 "assigned_rate_limits": { 00:08:33.153 "rw_ios_per_sec": 0, 00:08:33.153 "rw_mbytes_per_sec": 0, 00:08:33.153 "r_mbytes_per_sec": 0, 00:08:33.153 "w_mbytes_per_sec": 0 00:08:33.153 }, 00:08:33.153 "claimed": false, 00:08:33.153 "zoned": false, 00:08:33.153 "supported_io_types": { 00:08:33.153 "read": true, 00:08:33.153 "write": true, 00:08:33.153 "unmap": true, 00:08:33.153 "flush": false, 00:08:33.153 "reset": true, 00:08:33.153 "nvme_admin": false, 00:08:33.153 "nvme_io": false, 00:08:33.153 "nvme_io_md": false, 00:08:33.153 "write_zeroes": true, 00:08:33.153 "zcopy": false, 00:08:33.153 "get_zone_info": false, 00:08:33.153 "zone_management": false, 00:08:33.153 "zone_append": false, 00:08:33.153 "compare": false, 00:08:33.153 "compare_and_write": false, 00:08:33.153 "abort": false, 00:08:33.153 "seek_hole": true, 00:08:33.153 "seek_data": true, 00:08:33.153 "copy": false, 00:08:33.153 "nvme_iov_md": false 00:08:33.153 }, 00:08:33.153 "driver_specific": { 00:08:33.153 "lvol": { 00:08:33.153 "lvol_store_uuid": "08904c2e-848c-4307-8f9e-63691dbdaed3", 00:08:33.153 "base_bdev": "aio_bdev", 00:08:33.153 "thin_provision": false, 00:08:33.153 "num_allocated_clusters": 38, 00:08:33.153 "snapshot": false, 00:08:33.153 "clone": false, 00:08:33.153 "esnap_clone": false 00:08:33.153 } 00:08:33.153 } 00:08:33.153 } 00:08:33.153 ] 00:08:33.153 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:33.153 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:33.153 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:33.411 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:33.411 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:33.411 04:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:33.669 04:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:33.669 04:44:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6b47917c-9c47-469a-960d-0f867657caab 00:08:33.928 04:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08904c2e-848c-4307-8f9e-63691dbdaed3 00:08:34.186 04:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:34.444 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:34.444 00:08:34.444 real 0m20.999s 00:08:34.444 user 0m52.544s 00:08:34.444 sys 0m4.685s 00:08:34.444 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.444 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:34.444 ************************************ 00:08:34.444 END TEST lvs_grow_dirty 00:08:34.444 ************************************ 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:34.702 nvmf_trace.0 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.702 rmmod nvme_tcp 00:08:34.702 rmmod nvme_fabrics 00:08:34.702 rmmod nvme_keyring 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2213639 ']' 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2213639 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2213639 ']' 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2213639 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2213639 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.702 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.703 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2213639' 00:08:34.703 killing process with pid 2213639 00:08:34.703 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2213639 00:08:34.703 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2213639 00:08:34.961 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:34.961 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:34.961 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:34.961 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:34.961 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:34.961 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:34.961 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:34.961 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.961 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:34.961 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.961 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.961 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.866 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:37.126 00:08:37.126 real 0m45.762s 00:08:37.126 user 1m17.682s 00:08:37.126 sys 0m8.469s 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:37.126 ************************************ 00:08:37.126 END 
TEST nvmf_lvs_grow 00:08:37.126 ************************************ 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.126 ************************************ 00:08:37.126 START TEST nvmf_bdev_io_wait 00:08:37.126 ************************************ 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:37.126 * Looking for test storage... 00:08:37.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lcov --version 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:37.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.126 --rc genhtml_branch_coverage=1 00:08:37.126 --rc genhtml_function_coverage=1 00:08:37.126 --rc genhtml_legend=1 00:08:37.126 --rc geninfo_all_blocks=1 00:08:37.126 --rc geninfo_unexecuted_blocks=1 00:08:37.126 00:08:37.126 ' 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:37.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.126 --rc genhtml_branch_coverage=1 00:08:37.126 --rc genhtml_function_coverage=1 00:08:37.126 --rc genhtml_legend=1 00:08:37.126 --rc geninfo_all_blocks=1 00:08:37.126 --rc geninfo_unexecuted_blocks=1 00:08:37.126 00:08:37.126 ' 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:37.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.126 --rc genhtml_branch_coverage=1 00:08:37.126 --rc genhtml_function_coverage=1 00:08:37.126 --rc genhtml_legend=1 00:08:37.126 --rc geninfo_all_blocks=1 00:08:37.126 --rc geninfo_unexecuted_blocks=1 00:08:37.126 00:08:37.126 ' 00:08:37.126 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:37.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.126 --rc genhtml_branch_coverage=1 00:08:37.126 --rc genhtml_function_coverage=1 00:08:37.126 --rc genhtml_legend=1 00:08:37.126 --rc geninfo_all_blocks=1 00:08:37.127 --rc geninfo_unexecuted_blocks=1 00:08:37.127 00:08:37.127 ' 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.127 04:44:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.127 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.660 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.660 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:39.660 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:39.660 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:39.660 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:39.660 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:39.660 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:39.660 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:39.660 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:39.660 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:39.660 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:39.661 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:39.661 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.661 04:44:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:39.661 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:39.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:08:39.661 00:08:39.661 --- 10.0.0.2 ping statistics --- 00:08:39.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.661 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:08:39.661 00:08:39.661 --- 10.0.0.1 ping statistics --- 00:08:39.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.661 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.661 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2216290 00:08:39.662 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:39.662 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2216290 00:08:39.662 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2216290 ']' 00:08:39.662 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.662 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.662 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.662 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.662 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.662 [2024-10-28 04:44:29.879858] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
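For orientation, the nvmf_tcp_init records above amount to the following shell sequence, shown here as a condensed sketch: the interface names cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.x addresses are simply what this rig reports, so treat them as placeholders rather than fixed values.

ip netns add cvl_0_0_ns_spdk                                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic on port 4420 through
ping -c 1 10.0.0.2                                                 # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator reachability check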
00:08:39.662 [2024-10-28 04:44:29.879939] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.662 [2024-10-28 04:44:30.020226] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:39.662 [2024-10-28 04:44:30.061305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.662 [2024-10-28 04:44:30.114698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.662 [2024-10-28 04:44:30.114749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.662 [2024-10-28 04:44:30.114764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.662 [2024-10-28 04:44:30.114776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.662 [2024-10-28 04:44:30.114787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.662 [2024-10-28 04:44:30.116534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.662 [2024-10-28 04:44:30.116600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.662 [2024-10-28 04:44:30.116706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.662 [2024-10-28 04:44:30.116709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.596 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.596 04:44:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.597 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.597 [2024-10-28 04:44:30.993980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.597 Malloc0 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.597 [2024-10-28 04:44:31.047149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2216441 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2216442 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2216444 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@558 -- # local subsystem config 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:40.597 { 00:08:40.597 "params": { 00:08:40.597 "name": "Nvme$subsystem", 00:08:40.597 "trtype": "$TEST_TRANSPORT", 00:08:40.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.597 "adrfam": "ipv4", 00:08:40.597 "trsvcid": "$NVMF_PORT", 00:08:40.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.597 "hdgst": ${hdgst:-false}, 00:08:40.597 "ddgst": ${ddgst:-false} 00:08:40.597 }, 00:08:40.597 "method": "bdev_nvme_attach_controller" 00:08:40.597 } 00:08:40.597 EOF 00:08:40.597 )") 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2216447 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:40.597 { 00:08:40.597 "params": { 00:08:40.597 "name": "Nvme$subsystem", 00:08:40.597 "trtype": "$TEST_TRANSPORT", 00:08:40.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.597 "adrfam": "ipv4", 00:08:40.597 "trsvcid": "$NVMF_PORT", 00:08:40.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.597 "hdgst": ${hdgst:-false}, 00:08:40.597 "ddgst": ${ddgst:-false} 00:08:40.597 }, 00:08:40.597 "method": "bdev_nvme_attach_controller" 00:08:40.597 } 00:08:40.597 EOF 00:08:40.597 )") 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:40.597 04:44:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:40.597 { 00:08:40.597 "params": { 00:08:40.597 "name": "Nvme$subsystem", 00:08:40.597 "trtype": "$TEST_TRANSPORT", 00:08:40.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.597 "adrfam": "ipv4", 00:08:40.597 "trsvcid": "$NVMF_PORT", 00:08:40.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.597 "hdgst": ${hdgst:-false}, 00:08:40.597 "ddgst": ${ddgst:-false} 00:08:40.597 }, 00:08:40.597 "method": "bdev_nvme_attach_controller" 00:08:40.597 } 00:08:40.597 EOF 00:08:40.597 )") 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:40.597 { 00:08:40.597 "params": { 00:08:40.597 "name": "Nvme$subsystem", 00:08:40.597 "trtype": "$TEST_TRANSPORT", 00:08:40.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.597 "adrfam": "ipv4", 00:08:40.597 "trsvcid": "$NVMF_PORT", 00:08:40.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.597 "hdgst": ${hdgst:-false}, 00:08:40.597 "ddgst": ${ddgst:-false} 00:08:40.597 }, 00:08:40.597 "method": "bdev_nvme_attach_controller" 00:08:40.597 } 00:08:40.597 EOF 00:08:40.597 )") 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:40.597 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2216441 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
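The target-side setup that the bdev_io_wait test drives above reduces to a short RPC sequence. This is a sketch only: the full /var/jenkins/workspace/.../scripts/rpc.py path is shortened to rpc.py, and the values are the ones visible in this log. bdev_set_options has to run before framework_start_init, which is why the target was started with --wait-for-rpc.

rpc.py bdev_set_options -p 5 -c 1             # tiny bdev_io pool/cache so the io_wait retry path is exercised
rpc.py framework_start_init                   # finish subsystem init after the option tweak
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420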
00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:40.598 "params": { 00:08:40.598 "name": "Nvme1", 00:08:40.598 "trtype": "tcp", 00:08:40.598 "traddr": "10.0.0.2", 00:08:40.598 "adrfam": "ipv4", 00:08:40.598 "trsvcid": "4420", 00:08:40.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.598 "hdgst": false, 00:08:40.598 "ddgst": false 00:08:40.598 }, 00:08:40.598 "method": "bdev_nvme_attach_controller" 00:08:40.598 }' 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:40.598 "params": { 00:08:40.598 "name": "Nvme1", 00:08:40.598 "trtype": "tcp", 00:08:40.598 "traddr": "10.0.0.2", 00:08:40.598 "adrfam": "ipv4", 00:08:40.598 "trsvcid": "4420", 00:08:40.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.598 "hdgst": false, 00:08:40.598 "ddgst": false 00:08:40.598 }, 00:08:40.598 "method": "bdev_nvme_attach_controller" 00:08:40.598 }' 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:40.598 "params": { 00:08:40.598 "name": "Nvme1", 00:08:40.598 "trtype": "tcp", 00:08:40.598 "traddr": "10.0.0.2", 00:08:40.598 "adrfam": "ipv4", 00:08:40.598 "trsvcid": "4420", 00:08:40.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.598 "hdgst": false, 00:08:40.598 "ddgst": false 00:08:40.598 }, 00:08:40.598 "method": "bdev_nvme_attach_controller" 00:08:40.598 }' 00:08:40.598 04:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:40.598 "params": { 00:08:40.598 "name": "Nvme1", 00:08:40.598 "trtype": "tcp", 00:08:40.598 "traddr": "10.0.0.2", 00:08:40.598 "adrfam": "ipv4", 00:08:40.598 "trsvcid": "4420", 00:08:40.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.598 "hdgst": false, 00:08:40.598 "ddgst": false 00:08:40.598 }, 00:08:40.598 "method": "bdev_nvme_attach_controller" 00:08:40.598 }' 00:08:40.598 [2024-10-28 04:44:31.098319] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:08:40.598 [2024-10-28 04:44:31.098362] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:08:40.598 [2024-10-28 04:44:31.098362] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:08:40.598 [2024-10-28 04:44:31.098364] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
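Each of the four bdevperf jobs launched above is handed the printed JSON over a process substitution, which is what the /dev/fd/63 in the command lines corresponds to. A sketch of one launch follows, with the long build path shortened to bdevperf and gen_nvmf_target_json standing in for the helper that emits the config shown above:

bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
# The companion instances differ only in core mask, instance id and workload:
#   -m 0x20 -i 2 -w read,   -m 0x40 -i 3 -w flush,   -m 0x80 -i 4 -w unmap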
00:08:40.598 [2024-10-28 04:44:31.098407] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:40.598 [2024-10-28 04:44:31.098452] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-28 04:44:31.098452] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-28 04:44:31.098452] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:40.598 --proc-type=auto ] 00:08:40.598 --proc-type=auto ] 00:08:40.856 [2024-10-28 04:44:31.346857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:40.856 [2024-10-28 04:44:31.384273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.856 [2024-10-28 04:44:31.425961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:40.856 [2024-10-28 04:44:31.449767] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:41.115 [2024-10-28 04:44:31.488826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.115 [2024-10-28 04:44:31.532883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:41.115 [2024-10-28 04:44:31.547784] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:41.115 [2024-10-28 04:44:31.586612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.115 [2024-10-28 04:44:31.621101] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:41.115 [2024-10-28 04:44:31.630814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:41.115 [2024-10-28 04:44:31.659764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.115 [2024-10-28 04:44:31.698321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:41.373 Running I/O for 1 seconds... 00:08:41.373 Running I/O for 1 seconds... 00:08:41.373 Running I/O for 1 seconds... 00:08:41.373 Running I/O for 1 seconds... 
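The bdev_nvme_attach_controller blocks printed above come from nvmf/common.sh building one heredoc fragment per subsystem, joining the fragments with IFS=',' and validating the result with jq before handing it to bdevperf. A condensed, illustrative sketch of that pattern follows; the wrapper object, addresses and NQNs here are placeholders, not the verbatim helper:

# Illustrative sketch of the config assembly traced above (placeholder wrapper
# object, address and NQNs; not the verbatim nvmf/common.sh code).
config=()
for subsystem in "${@:-1}"; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
joined=$(IFS=,; printf '%s' "${config[*]}")   # comma-join the per-subsystem fragments
printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "$joined" | jq .

The harness then feeds the resulting JSON to each bdevperf instance (for example via --json and process substitution), so the controllers are attached before the one-second I/O runs above begin.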
00:08:42.308 10120.00 IOPS, 39.53 MiB/s
00:08:42.308 Latency(us)
00:08:42.308 [2024-10-28T03:44:32.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:42.308 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:42.308 Nvme1n1 : 1.01 10166.68 39.71 0.00 0.00 12536.77 6910.05 18783.66
00:08:42.308 [2024-10-28T03:44:32.904Z] ===================================================================================================================
00:08:42.308 [2024-10-28T03:44:32.904Z] Total : 10166.68 39.71 0.00 0.00 12536.77 6910.05 18783.66
00:08:42.308 8547.00 IOPS, 33.39 MiB/s
00:08:42.309 Latency(us)
00:08:42.309 [2024-10-28T03:44:32.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:42.309 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:42.309 Nvme1n1 : 1.01 8609.96 33.63 0.00 0.00 14799.10 5863.81 24136.52
00:08:42.309 [2024-10-28T03:44:32.905Z] ===================================================================================================================
00:08:42.309 [2024-10-28T03:44:32.905Z] Total : 8609.96 33.63 0.00 0.00 14799.10 5863.81 24136.52
00:08:42.309 82000.00 IOPS, 320.31 MiB/s
00:08:42.309 Latency(us)
00:08:42.309 [2024-10-28T03:44:32.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:42.309 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:42.309 Nvme1n1 : 1.00 81846.68 319.71 0.00 0.00 1555.24 296.54 3406.36
00:08:42.309 [2024-10-28T03:44:32.905Z] ===================================================================================================================
00:08:42.309 [2024-10-28T03:44:32.905Z] Total : 81846.68 319.71 0.00 0.00 1555.24 296.54 3406.36
00:08:42.567 9290.00 IOPS, 36.29 MiB/s
00:08:42.567 Latency(us)
00:08:42.567 [2024-10-28T03:44:33.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:42.567 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:42.567 Nvme1n1 : 1.01 9357.27 36.55 0.00 0.00 13624.95 5450.18 24623.14
00:08:42.567 [2024-10-28T03:44:33.163Z] ===================================================================================================================
00:08:42.567 [2024-10-28T03:44:33.163Z] Total : 9357.27 36.55 0.00 0.00 13624.95 5450.18 24623.14
00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2216442
00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2216444
00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2216447
00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- #
nvmfcleanup 00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.567 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.567 rmmod nvme_tcp 00:08:42.567 rmmod nvme_fabrics 00:08:42.567 rmmod nvme_keyring 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2216290 ']' 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2216290 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2216290 ']' 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2216290 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2216290 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2216290' 00:08:42.827 killing process with pid 2216290 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2216290 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2216290 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:42.827 04:44:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.827 04:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.360 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:45.360 00:08:45.360 real 0m7.938s 00:08:45.361 user 0m17.740s 00:08:45.361 sys 0m4.051s 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.361 ************************************ 00:08:45.361 END TEST nvmf_bdev_io_wait 00:08:45.361 ************************************ 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.361 ************************************ 00:08:45.361 START TEST nvmf_queue_depth 00:08:45.361 ************************************ 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:45.361 * Looking for test storage... 
00:08:45.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lcov --version 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:45.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.361 --rc genhtml_branch_coverage=1 00:08:45.361 --rc genhtml_function_coverage=1 00:08:45.361 --rc genhtml_legend=1 00:08:45.361 --rc geninfo_all_blocks=1 00:08:45.361 --rc geninfo_unexecuted_blocks=1 00:08:45.361 00:08:45.361 ' 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:45.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.361 --rc genhtml_branch_coverage=1 00:08:45.361 --rc genhtml_function_coverage=1 00:08:45.361 --rc genhtml_legend=1 00:08:45.361 --rc geninfo_all_blocks=1 00:08:45.361 --rc geninfo_unexecuted_blocks=1 00:08:45.361 00:08:45.361 ' 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:45.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.361 --rc genhtml_branch_coverage=1 00:08:45.361 --rc genhtml_function_coverage=1 00:08:45.361 --rc genhtml_legend=1 00:08:45.361 --rc geninfo_all_blocks=1 00:08:45.361 --rc geninfo_unexecuted_blocks=1 00:08:45.361 00:08:45.361 ' 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:45.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.361 --rc genhtml_branch_coverage=1 00:08:45.361 --rc genhtml_function_coverage=1 00:08:45.361 --rc genhtml_legend=1 00:08:45.361 --rc geninfo_all_blocks=1 00:08:45.361 --rc geninfo_unexecuted_blocks=1 00:08:45.361 00:08:45.361 ' 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.361 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.362 04:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.266 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.266 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.266 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.266 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.266 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.266 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.266 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.266 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.266 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.266 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:47.266 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:47.267 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:47.267 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:47.267 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:47.267 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:08:47.267 00:08:47.267 --- 10.0.0.2 ping statistics --- 00:08:47.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.267 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:47.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:08:47.267 00:08:47.267 --- 10.0.0.1 ping statistics --- 00:08:47.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.267 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2218650 00:08:47.267 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:47.268 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2218650 00:08:47.268 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2218650 ']' 00:08:47.268 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.268 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.268 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.268 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.268 04:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.527 [2024-10-28 04:44:37.868112] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
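The nvmf_tcp_init sequence traced above amounts to, in essence, the following bring-up; the interface names are this rig's back-to-back cvl_0_0/cvl_0_1 pair, and on other hardware a veth pair would play the same role:

# Condensed recap of the target-in-a-namespace topology set up by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk                                         # target side gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator keeps 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target listens on 10.0.0.2
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check
modprobe nvme-tcp

The nvmf_tgt daemon is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is the 'Starting SPDK ... initialization' message directly above.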
00:08:47.527 [2024-10-28 04:44:37.868202] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.527 [2024-10-28 04:44:38.017295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:47.527 [2024-10-28 04:44:38.060286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.527 [2024-10-28 04:44:38.110586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.527 [2024-10-28 04:44:38.110668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.527 [2024-10-28 04:44:38.110685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.527 [2024-10-28 04:44:38.110713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.527 [2024-10-28 04:44:38.110723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.527 [2024-10-28 04:44:38.111377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.785 [2024-10-28 04:44:38.266813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.785 Malloc0 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.785 04:44:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.785 [2024-10-28 04:44:38.316606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2218798 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2218798 /var/tmp/bdevperf.sock 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2218798 ']' 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:47.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.785 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.785 [2024-10-28 04:44:38.368982] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:08:47.785 [2024-10-28 04:44:38.369056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218798 ] 00:08:48.043 [2024-10-28 04:44:38.505749] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:48.043 [2024-10-28 04:44:38.545111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.043 [2024-10-28 04:44:38.594403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.302 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.302 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:48.302 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:48.302 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.302 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.302 NVMe0n1 00:08:48.302 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.302 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:48.302 Running I/O for 10 seconds... 00:08:50.614 7557.00 IOPS, 29.52 MiB/s [2024-10-28T03:44:42.144Z] 7798.00 IOPS, 30.46 MiB/s [2024-10-28T03:44:43.079Z] 7937.33 IOPS, 31.01 MiB/s [2024-10-28T03:44:44.015Z] 8002.75 IOPS, 31.26 MiB/s [2024-10-28T03:44:44.949Z] 8112.80 IOPS, 31.69 MiB/s [2024-10-28T03:44:46.325Z] 8123.83 IOPS, 31.73 MiB/s [2024-10-28T03:44:46.892Z] 8173.57 IOPS, 31.93 MiB/s [2024-10-28T03:44:48.268Z] 8177.12 IOPS, 31.94 MiB/s [2024-10-28T03:44:49.203Z] 8179.56 IOPS, 31.95 MiB/s [2024-10-28T03:44:49.203Z] 8183.60 IOPS, 31.97 MiB/s 00:08:58.607 Latency(us) 00:08:58.607 [2024-10-28T03:44:49.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.607 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:58.607 Verification LBA range: start 0x0 length 0x4000 00:08:58.607 NVMe0n1 : 10.08 8218.30 32.10 0.00 0.00 124057.45 22968.62 82142.02 00:08:58.607 [2024-10-28T03:44:49.203Z] =================================================================================================================== 00:08:58.607 [2024-10-28T03:44:49.203Z] Total : 8218.30 32.10 0.00 0.00 124057.45 22968.62 82142.02 00:08:58.607 { 00:08:58.607 "results": [ 00:08:58.607 { 00:08:58.607 "job": "NVMe0n1", 00:08:58.607 "core_mask": "0x1", 00:08:58.607 "workload": "verify", 00:08:58.607 "status": "finished", 00:08:58.607 "verify_range": { 00:08:58.607 "start": 0, 00:08:58.607 "length": 16384 00:08:58.607 }, 00:08:58.607 "queue_depth": 1024, 00:08:58.607 "io_size": 4096, 00:08:58.607 "runtime": 10.082379, 00:08:58.607 "iops": 8218.298478960174, 00:08:58.607 "mibps": 32.10272843343818, 00:08:58.607 "io_failed": 0, 00:08:58.607 "io_timeout": 0, 00:08:58.607 "avg_latency_us": 124057.45475049422, 00:08:58.607 "min_latency_us": 22968.62223872285, 00:08:58.607 "max_latency_us": 82142.02190458511 00:08:58.607 } 00:08:58.607 ], 00:08:58.607 "core_count": 1 00:08:58.607 } 00:08:58.607 04:44:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2218798 00:08:58.607 04:44:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2218798 ']' 00:08:58.607 04:44:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2218798 00:08:58.607 04:44:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:58.607 04:44:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.607 04:44:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2218798 00:08:58.607 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:58.607 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:58.607 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2218798' 00:08:58.607 killing process with pid 2218798 00:08:58.607 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2218798 00:08:58.607 Received shutdown signal, test time was about 10.000000 seconds 00:08:58.607 00:08:58.607 Latency(us) 00:08:58.607 [2024-10-28T03:44:49.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.607 [2024-10-28T03:44:49.203Z] =================================================================================================================== 00:08:58.607 [2024-10-28T03:44:49.203Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:58.607 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2218798 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:58.866 rmmod nvme_tcp 00:08:58.866 rmmod nvme_fabrics 00:08:58.866 rmmod nvme_keyring 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2218650 ']' 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2218650 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2218650 ']' 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2218650 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2218650 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2218650' 00:08:58.866 killing process with pid 2218650 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2218650 00:08:58.866 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2218650 00:08:59.126 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:59.126 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:59.126 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:59.126 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:59.126 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:59.126 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:59.126 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:59.126 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.126 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.126 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.126 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.126 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.033 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.033 00:09:01.033 real 0m16.070s 00:09:01.033 user 0m22.438s 00:09:01.033 sys 0m3.079s 00:09:01.033 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.033 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.033 ************************************ 00:09:01.033 END TEST nvmf_queue_depth 00:09:01.033 ************************************ 00:09:01.033 04:44:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:01.033 04:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:01.033 04:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.033 04:44:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.033 ************************************ 00:09:01.033 START TEST nvmf_target_multipath 00:09:01.033 ************************************ 00:09:01.033 04:44:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:01.294 * Looking for test storage... 00:09:01.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lcov --version 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:01.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.294 --rc genhtml_branch_coverage=1 00:09:01.294 --rc genhtml_function_coverage=1 00:09:01.294 --rc genhtml_legend=1 00:09:01.294 --rc geninfo_all_blocks=1 00:09:01.294 --rc geninfo_unexecuted_blocks=1 00:09:01.294 00:09:01.294 ' 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:01.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.294 --rc genhtml_branch_coverage=1 00:09:01.294 --rc genhtml_function_coverage=1 00:09:01.294 --rc genhtml_legend=1 00:09:01.294 --rc geninfo_all_blocks=1 00:09:01.294 --rc geninfo_unexecuted_blocks=1 00:09:01.294 00:09:01.294 ' 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:01.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.294 --rc genhtml_branch_coverage=1 00:09:01.294 --rc genhtml_function_coverage=1 00:09:01.294 --rc genhtml_legend=1 00:09:01.294 --rc geninfo_all_blocks=1 00:09:01.294 --rc geninfo_unexecuted_blocks=1 00:09:01.294 00:09:01.294 ' 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:01.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.294 --rc genhtml_branch_coverage=1 00:09:01.294 --rc genhtml_function_coverage=1 00:09:01.294 --rc genhtml_legend=1 00:09:01.294 --rc geninfo_all_blocks=1 00:09:01.294 --rc geninfo_unexecuted_blocks=1 00:09:01.294 00:09:01.294 ' 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.294 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.295 04:44:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.830 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:03.831 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:03.831 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:03.831 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.831 04:44:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:03.831 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.831 04:44:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:03.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:09:03.831 00:09:03.831 --- 10.0.0.2 ping statistics --- 00:09:03.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.831 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:03.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:09:03.831 00:09:03.831 --- 10.0.0.1 ping statistics --- 00:09:03.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.831 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:03.831 only one NIC for nvmf test 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.831 rmmod nvme_tcp 00:09:03.831 rmmod nvme_fabrics 00:09:03.831 rmmod nvme_keyring 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:03.831 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.832 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.832 04:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.736 00:09:05.736 real 0m4.630s 00:09:05.736 user 0m0.907s 00:09:05.736 sys 0m1.666s 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:05.736 ************************************ 00:09:05.736 END TEST nvmf_target_multipath 00:09:05.736 ************************************ 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.736 ************************************ 00:09:05.736 START TEST nvmf_zcopy 00:09:05.736 ************************************ 00:09:05.736 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:05.995 * Looking for test storage... 
00:09:05.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lcov --version 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:05.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.995 --rc genhtml_branch_coverage=1 00:09:05.995 --rc genhtml_function_coverage=1 00:09:05.995 --rc genhtml_legend=1 00:09:05.995 --rc geninfo_all_blocks=1 00:09:05.995 --rc geninfo_unexecuted_blocks=1 00:09:05.995 00:09:05.995 ' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:05.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.995 --rc genhtml_branch_coverage=1 00:09:05.995 --rc genhtml_function_coverage=1 00:09:05.995 --rc genhtml_legend=1 00:09:05.995 --rc geninfo_all_blocks=1 00:09:05.995 --rc geninfo_unexecuted_blocks=1 00:09:05.995 00:09:05.995 ' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:05.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.995 --rc genhtml_branch_coverage=1 00:09:05.995 --rc genhtml_function_coverage=1 00:09:05.995 --rc genhtml_legend=1 00:09:05.995 --rc geninfo_all_blocks=1 00:09:05.995 --rc geninfo_unexecuted_blocks=1 00:09:05.995 00:09:05.995 ' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:05.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.995 --rc genhtml_branch_coverage=1 00:09:05.995 --rc genhtml_function_coverage=1 00:09:05.995 --rc genhtml_legend=1 00:09:05.995 --rc geninfo_all_blocks=1 00:09:05.995 --rc geninfo_unexecuted_blocks=1 00:09:05.995 00:09:05.995 ' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.995 04:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:08.527 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:08.528 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:08.528 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:08.528 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:08.528 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:09:08.528 00:09:08.528 --- 10.0.0.2 ping statistics --- 00:09:08.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.528 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:09:08.528 00:09:08.528 --- 10.0.0.1 ping statistics --- 00:09:08.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.528 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2223947 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2223947 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2223947 ']' 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.528 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.529 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.529 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.529 04:44:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:08.529 [2024-10-28 04:44:58.803453] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:09:08.529 [2024-10-28 04:44:58.803521] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.529 [2024-10-28 04:44:58.940197] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:09:08.529 [2024-10-28 04:44:58.981066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.529 [2024-10-28 04:44:59.028472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.529 [2024-10-28 04:44:59.028541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.529 [2024-10-28 04:44:59.028558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.529 [2024-10-28 04:44:59.028572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.529 [2024-10-28 04:44:59.028583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.529 [2024-10-28 04:44:59.029269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.516 [2024-10-28 04:44:59.871378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.516 [2024-10-28 04:44:59.887560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.516 malloc0 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:09.516 { 00:09:09.516 "params": { 00:09:09.516 "name": "Nvme$subsystem", 00:09:09.516 "trtype": "$TEST_TRANSPORT", 00:09:09.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.516 "adrfam": "ipv4", 00:09:09.516 "trsvcid": "$NVMF_PORT", 00:09:09.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.516 "hdgst": ${hdgst:-false}, 00:09:09.516 "ddgst": ${ddgst:-false} 00:09:09.516 }, 00:09:09.516 "method": "bdev_nvme_attach_controller" 00:09:09.516 } 00:09:09.516 EOF 00:09:09.516 )") 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
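For orientation, the target-side bring-up traced above by nvmf/common.sh and zcopy.sh condenses to the sketch below. It is reassembled from the traced commands of this particular run, not lifted from the scripts themselves: rpc_cmd is the harness's RPC helper and is rendered here as direct scripts/rpc.py calls on the assumption that they are equivalent, and the interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addresses, the namespace name and the binary paths are specific to this job.

# Put one port of the e810 pair into a private network namespace and
# address both ends, so initiator/target traffic crosses the physical link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

# Start nvmf_tgt inside the namespace on core 1 (-m 0x2), then create the
# zcopy-enabled TCP transport, a subsystem with a 32 MiB / 4 KiB-block
# malloc namespace, and a listener on 10.0.0.2:4420. (The harness waits
# for the RPC socket to come up before issuing these.)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1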
00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:09.516 04:44:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:09.516 "params": { 00:09:09.516 "name": "Nvme1", 00:09:09.516 "trtype": "tcp", 00:09:09.516 "traddr": "10.0.0.2", 00:09:09.516 "adrfam": "ipv4", 00:09:09.516 "trsvcid": "4420", 00:09:09.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.516 "hdgst": false, 00:09:09.516 "ddgst": false 00:09:09.516 }, 00:09:09.516 "method": "bdev_nvme_attach_controller" 00:09:09.516 }' 00:09:09.516 [2024-10-28 04:44:59.975614] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:09:09.516 [2024-10-28 04:44:59.975724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224101 ] 00:09:09.775 [2024-10-28 04:45:00.114953] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:09.775 [2024-10-28 04:45:00.161706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.775 [2024-10-28 04:45:00.213732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.033 Running I/O for 10 seconds... 00:09:12.343 5639.00 IOPS, 44.05 MiB/s [2024-10-28T03:45:03.876Z] 5700.50 IOPS, 44.54 MiB/s [2024-10-28T03:45:04.811Z] 5698.33 IOPS, 44.52 MiB/s [2024-10-28T03:45:05.746Z] 5693.25 IOPS, 44.48 MiB/s [2024-10-28T03:45:06.681Z] 5692.80 IOPS, 44.48 MiB/s [2024-10-28T03:45:07.615Z] 5708.00 IOPS, 44.59 MiB/s [2024-10-28T03:45:08.550Z] 5720.86 IOPS, 44.69 MiB/s [2024-10-28T03:45:09.925Z] 5723.00 IOPS, 44.71 MiB/s [2024-10-28T03:45:10.857Z] 5724.67 IOPS, 44.72 MiB/s [2024-10-28T03:45:10.857Z] 5732.90 IOPS, 44.79 MiB/s 00:09:20.261 Latency(us) 00:09:20.261 [2024-10-28T03:45:10.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.261 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:20.261 Verification LBA range: start 0x0 length 0x1000 00:09:20.261 Nvme1n1 : 10.02 5733.88 44.80 0.00 0.00 22260.38 2177.64 32311.79 00:09:20.261 [2024-10-28T03:45:10.857Z] =================================================================================================================== 00:09:20.261 [2024-10-28T03:45:10.857Z] Total : 5733.88 44.80 0.00 0.00 22260.38 2177.64 32311.79 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2225386 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem 
in "${@:-1}" 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:20.261 { 00:09:20.261 "params": { 00:09:20.261 "name": "Nvme$subsystem", 00:09:20.261 "trtype": "$TEST_TRANSPORT", 00:09:20.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:20.261 "adrfam": "ipv4", 00:09:20.261 "trsvcid": "$NVMF_PORT", 00:09:20.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:20.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:20.261 "hdgst": ${hdgst:-false}, 00:09:20.261 "ddgst": ${ddgst:-false} 00:09:20.261 }, 00:09:20.261 "method": "bdev_nvme_attach_controller" 00:09:20.261 } 00:09:20.261 EOF 00:09:20.261 )") 00:09:20.261 [2024-10-28 04:45:10.751956] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.751997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:20.261 04:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:20.261 "params": { 00:09:20.261 "name": "Nvme1", 00:09:20.261 "trtype": "tcp", 00:09:20.261 "traddr": "10.0.0.2", 00:09:20.261 "adrfam": "ipv4", 00:09:20.261 "trsvcid": "4420", 00:09:20.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:20.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:20.261 "hdgst": false, 00:09:20.261 "ddgst": false 00:09:20.261 }, 00:09:20.261 "method": "bdev_nvme_attach_controller" 00:09:20.261 }' 00:09:20.261 [2024-10-28 04:45:10.759886] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.759926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.261 [2024-10-28 04:45:10.767884] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.767907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.261 [2024-10-28 04:45:10.775886] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.775908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.261 [2024-10-28 04:45:10.783889] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.783926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.261 [2024-10-28 04:45:10.791890] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.791926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.261 [2024-10-28 04:45:10.799893] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.799930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.261 [2024-10-28 04:45:10.800061] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:09:20.261 [2024-10-28 04:45:10.800141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225386 ] 00:09:20.261 [2024-10-28 04:45:10.807895] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.807931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.261 [2024-10-28 04:45:10.815899] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.815936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.261 [2024-10-28 04:45:10.823900] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.823937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.261 [2024-10-28 04:45:10.831900] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.831938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.261 [2024-10-28 04:45:10.839905] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.839942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.261 [2024-10-28 04:45:10.847907] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.261 [2024-10-28 04:45:10.847942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.855911] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.855934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.863926] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.863948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.871913] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.871949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.879916] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.879950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.887931] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.887951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.895933] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.895954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.903940] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.903961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.911936] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.911962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.919939] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.919965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.927940] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.927965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.935944] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.935979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.938620] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:20.519 [2024-10-28 04:45:10.943967] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.943992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.951957] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.951995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.959965] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.959990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.967963] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.968002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.975967] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.976006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.980899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.519 [2024-10-28 04:45:10.983973] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.983999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.992012] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:10.992051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:10.999983] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.000030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.007972] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.008012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.015987] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.016014] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.023984] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.024022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.032005] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.032031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.032385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.519 [2024-10-28 04:45:11.039982] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.040022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.048012] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.048044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.056027] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.056065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.064023] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.064060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.072037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.072075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.080039] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.080080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.088022] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.088057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.096040] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.096080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.104023] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.104049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.519 [2024-10-28 04:45:11.112059] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.519 [2024-10-28 04:45:11.112097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.120079] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.120120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.128064] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.128105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.136036] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.136061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.144050] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.144076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.152048] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.152075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.160070] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.160115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.168069] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.168097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.176070] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.176099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.184081] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.184112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.192076] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.192105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.200084] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.200113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.208066] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.208092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.216093] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.216125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.224114] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.224140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 Running I/O for 5 seconds... 
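The message pairs that repeat from here to the end of the excerpt, "Requested NSID 1 already in use" followed by "Unable to add namespace", come from the target side each time an nvmf_subsystem_add_ns RPC asks for NSID 1 again while the 5-second randrw bdevperf job launched above (perfpid 2225386) runs. The loop driving those RPCs is not visible in this excerpt, but judging by the nvmf_rpc_ns_paused frame in the message, each attempt pauses the subsystem for the update, is rejected because malloc0 already occupies NSID 1, and then lets the subsystem resume, which appears to be the point: repeatedly pausing and resuming the subsystem underneath the zcopy workload. A single such pair can be reproduced by re-issuing the earlier namespace add by hand, assuming scripts/rpc.py against the target's RPC socket:

# Re-add the namespace that already exists; the target rejects it and
# logs the same two errors seen throughout this run.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# target log: spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use
# target log: nvmf_rpc_ns_paused: Unable to add namespace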
00:09:20.777 [2024-10-28 04:45:11.232090] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.232116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.246512] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.246551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.258732] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.258762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.270568] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.270599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.284242] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.284273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.295726] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.295755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.307602] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.307642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.319371] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.319402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.330967] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.330999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.342205] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.342236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.353417] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.353448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.777 [2024-10-28 04:45:11.367004] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.777 [2024-10-28 04:45:11.367035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.034 [2024-10-28 04:45:11.378333] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.378365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.389887] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.389930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.401747] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 
[2024-10-28 04:45:11.401775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.413388] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.413419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.427195] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.427227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.438387] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.438428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.449994] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.450026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.461815] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.461845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.473263] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.473295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.484642] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.484690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.496423] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.496454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.507688] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.507717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.519200] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.519231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.530791] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.530820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.542491] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.542523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.553993] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.554025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.565563] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.565595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.577239] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.577270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.589176] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.589207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.600841] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.600869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.614301] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.614332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.035 [2024-10-28 04:45:11.625689] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.035 [2024-10-28 04:45:11.625718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.636794] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.636823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.648359] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.648391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.659545] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.659589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.670828] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.670857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.682540] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.682572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.693855] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.693884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.705236] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.705267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.717269] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.717300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.728731] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.728760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.740603] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.740642] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.753993] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.754025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.764943] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.764988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.776773] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.776803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.788532] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.788565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.802255] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.802287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.813352] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.813384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.824284] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.824315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.835453] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.835484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.847122] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.847153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.858813] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.858842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.870246] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.870277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.292 [2024-10-28 04:45:11.882077] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.292 [2024-10-28 04:45:11.882116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:11.893844] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:11.893873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:11.905404] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:11.905434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:11.916840] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:11.916869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:11.927643] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:11.927671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:11.938562] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:11.938590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:11.948537] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:11.948566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:11.959005] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:11.959033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:11.969513] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:11.969541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:11.979932] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:11.979959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:11.990249] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:11.990278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:12.000496] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:12.000524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:12.010914] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:12.010942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:12.021254] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:12.021281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:12.031928] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:12.031957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:12.042137] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:12.042165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:12.052854] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:12.052893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.550 [2024-10-28 04:45:12.063016] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.550 [2024-10-28 04:45:12.063044] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.550 - 00:09:24.911 [2024-10-28 04:45:12.073233 through 04:45:15.460935] The same pair of errors repeats back-to-back at roughly 10-13 ms intervals for this entire window, as each namespace-add attempt requests an NSID that is already registered:
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
fio progress samples interleaved with the error loop over the same window:
11123.00 IOPS, 86.90 MiB/s [2024-10-28T03:45:12.404Z]
11053.00 IOPS, 86.35 MiB/s [2024-10-28T03:45:13.434Z]
10994.67 IOPS, 85.90 MiB/s [2024-10-28T03:45:14.470Z]
10970.00 IOPS, 85.70 MiB/s [2024-10-28T03:45:15.248Z]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.911 [2024-10-28 04:45:15.472002] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.911 [2024-10-28 04:45:15.472034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.911 [2024-10-28 04:45:15.483733] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.911 [2024-10-28 04:45:15.483762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.911 [2024-10-28 04:45:15.495793] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.911 [2024-10-28 04:45:15.495822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.507103] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.507135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.518724] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.518753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.532487] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.532518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.544107] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.544138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.555318] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.555349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.567069] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.567100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.578651] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.578706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.590287] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.590318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.601882] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.601910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.613160] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.613199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.624881] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.624910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.636368] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.636400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.647870] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.647899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.659008] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.659039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.672216] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.672248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.682465] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.682496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.694707] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.694735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.706860] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.706896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.718324] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.718356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.729956] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.729988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.742040] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.742071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.170 [2024-10-28 04:45:15.753613] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.170 [2024-10-28 04:45:15.753654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.428 [2024-10-28 04:45:15.767299] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.428 [2024-10-28 04:45:15.767327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.428 [2024-10-28 04:45:15.777770] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.428 [2024-10-28 04:45:15.777798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.428 [2024-10-28 04:45:15.788825] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.428 [2024-10-28 04:45:15.788854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.428 [2024-10-28 04:45:15.800277] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.428 [2024-10-28 04:45:15.800305] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.428 [2024-10-28 04:45:15.811220] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.428 [2024-10-28 04:45:15.811249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.428 [2024-10-28 04:45:15.824064] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.428 [2024-10-28 04:45:15.824092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.834230] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.834265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.845253] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.845281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.858092] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.858120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.868091] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.868118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.879488] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.879515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.892496] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.892523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.903181] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.903208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.913831] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.913860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.925175] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.925202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.936258] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.936286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.949066] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.949093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.960896] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.960924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.970329] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.970357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.981991] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.982019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:15.992867] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:15.992895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:16.003851] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:16.003880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.429 [2024-10-28 04:45:16.015001] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.429 [2024-10-28 04:45:16.015030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.026807] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.026836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.038868] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.038896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.050632] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.050697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.062337] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.062369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.073911] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.073955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.085802] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.085831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.097783] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.097813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.109536] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.109567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.120880] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.120909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.132645] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.132691] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.144267] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.144298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.157870] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.157899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.169068] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.169099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.180778] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.180807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.192353] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.192384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.203837] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.203866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.215507] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.215538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 [2024-10-28 04:45:16.227275] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.227306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 10997.60 IOPS, 85.92 MiB/s [2024-10-28T03:45:16.282Z] [2024-10-28 04:45:16.237538] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.686 [2024-10-28 04:45:16.237569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.686 00:09:25.686 Latency(us) 00:09:25.686 [2024-10-28T03:45:16.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.686 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:25.686 Nvme1n1 : 5.01 10998.99 85.93 0.00 0.00 11621.56 3941.65 25304.41 00:09:25.686 [2024-10-28T03:45:16.282Z] =================================================================================================================== 00:09:25.686 [2024-10-28T03:45:16.282Z] Total : 10998.99 85.93 0.00 0.00 11621.56 3941.65 25304.41 00:09:25.686 [2024-10-28 04:45:16.247486] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.687 [2024-10-28 04:45:16.247517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.687 [2024-10-28 04:45:16.255469] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.687 [2024-10-28 04:45:16.255496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.687 [2024-10-28 04:45:16.263525] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.687 [2024-10-28 
04:45:16.263577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.687 [2024-10-28 04:45:16.271519] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.687 [2024-10-28 04:45:16.271572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.687 [2024-10-28 04:45:16.279522] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.687 [2024-10-28 04:45:16.279571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.287526] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.287575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.295520] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.295567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.303535] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.303584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.311536] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.311585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.319536] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.319584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.327545] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.327595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.335550] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.335603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.343549] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.343598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.351544] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.351593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.359547] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.359593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.367556] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.367604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.375549] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.375596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.383544] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.383586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.391527] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.391556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.399561] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.399605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.407558] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.407605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.415578] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.415628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.423533] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.423558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.431536] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.431563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 [2024-10-28 04:45:16.439534] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.945 [2024-10-28 04:45:16.439560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2225386) - No such process 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2225386 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.945 delay0 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.945 04:45:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.945 04:45:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:26.203 [2024-10-28 04:45:16.674698] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:34.313 Initializing NVMe Controllers 00:09:34.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:34.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:34.313 Initialization complete. Launching workers. 00:09:34.313 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 240, failed: 20114 00:09:34.313 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20235, failed to submit 119 00:09:34.313 success 20143, unsuccessful 92, failed 0 00:09:34.313 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:34.313 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:34.313 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:34.313 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:34.313 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.313 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:34.313 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.313 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.313 rmmod nvme_tcp 00:09:34.313 rmmod nvme_fabrics 00:09:34.313 rmmod nvme_keyring 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2223947 ']' 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2223947 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2223947 ']' 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2223947 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2223947 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2223947' 00:09:34.314 killing process with pid 2223947 00:09:34.314 
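The RPC sequence traced above is the core of the zcopy abort case: namespace 1 is swapped from the malloc bdev to a delay bdev so that slow I/O is left in flight for the abort example to cancel. A minimal standalone sketch of the same flow, assuming the stock scripts/rpc.py helper is used in place of the harness rpc_cmd wrapper and that nqn.2016-06.io.spdk:cnode1 already exists with malloc0 attached as namespace 1:

# Swap namespace 1 over to a delay bdev configured with large artificial latencies
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Queue random I/O against the slow namespace for 5 seconds and abort it,
# matching the abort invocation shown in the log above
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'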
04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2223947 00:09:34.314 04:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2223947 00:09:34.314 04:45:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:34.314 04:45:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:34.314 04:45:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:34.314 04:45:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:34.314 04:45:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:09:34.314 04:45:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:34.314 04:45:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:34.314 04:45:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.314 04:45:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.314 04:45:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.314 04:45:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.314 04:45:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.692 00:09:35.692 real 0m29.774s 00:09:35.692 user 0m42.795s 00:09:35.692 sys 0m9.398s 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.692 ************************************ 00:09:35.692 END TEST nvmf_zcopy 00:09:35.692 ************************************ 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.692 ************************************ 00:09:35.692 START TEST nvmf_nmic 00:09:35.692 ************************************ 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:35.692 * Looking for test storage... 
00:09:35.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1689 -- # lcov --version 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.692 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:35.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.692 --rc genhtml_branch_coverage=1 00:09:35.692 --rc genhtml_function_coverage=1 00:09:35.692 --rc genhtml_legend=1 00:09:35.692 --rc geninfo_all_blocks=1 00:09:35.692 --rc geninfo_unexecuted_blocks=1 00:09:35.692 00:09:35.693 ' 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:35.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.693 --rc genhtml_branch_coverage=1 00:09:35.693 --rc genhtml_function_coverage=1 00:09:35.693 --rc genhtml_legend=1 00:09:35.693 --rc geninfo_all_blocks=1 00:09:35.693 --rc geninfo_unexecuted_blocks=1 00:09:35.693 00:09:35.693 ' 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:35.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.693 --rc genhtml_branch_coverage=1 00:09:35.693 --rc genhtml_function_coverage=1 00:09:35.693 --rc genhtml_legend=1 00:09:35.693 --rc geninfo_all_blocks=1 00:09:35.693 --rc geninfo_unexecuted_blocks=1 00:09:35.693 00:09:35.693 ' 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:35.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.693 --rc genhtml_branch_coverage=1 00:09:35.693 --rc genhtml_function_coverage=1 00:09:35.693 --rc genhtml_legend=1 00:09:35.693 --rc geninfo_all_blocks=1 00:09:35.693 --rc geninfo_unexecuted_blocks=1 00:09:35.693 00:09:35.693 ' 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
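The lcov gate traced just above reduces to a plain numeric comparison: both version strings are split on '.', '-' and ':' and compared component by component, and the old-style --rc lcov_branch_coverage/lcov_function_coverage options are only exported when the installed lcov predates 2.x. A rough standalone equivalent, with the function name and message chosen here for illustration rather than taken from scripts/common.sh:

# Return success (0) when version $1 sorts strictly before version $2
ver_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}    # missing components count as 0; assumes numeric parts
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1                               # equal versions are not "less than"
}

ver_lt 1.15 2 && echo 'use the old --rc lcov_branch_coverage=1 style options'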
00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.693 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.951 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.951 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.951 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.951 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.951 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.951 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.951 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.951 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.951 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:35.952 
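At this point nvmf/common.sh has pinned the defaults the rest of the test leans on: port 4420 for the first listener, a host NQN generated with nvme gen-hostnqn, and NVME_CONNECT='nvme connect'. When the initiator side of an nmic-style test eventually attaches to the target, those variables combine into an nvme-cli call roughly like the sketch below; the target address is the 10.0.0.2 assigned later in this setup, and the host NQN is simply the generated value echoed in the log:

NVMF_PORT=4420
NVMF_FIRST_TARGET_IP=10.0.0.2
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Attach over NVMe/TCP using the harness defaults
nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
    -n "$NVME_SUBNQN" --hostnqn="$NVME_HOSTNQN"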
04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.952 04:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:37.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:37.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:37.858 04:45:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:37.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:37.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.858 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:37.859 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:38.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:09:38.117 00:09:38.117 --- 10.0.0.2 ping statistics --- 00:09:38.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.117 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:38.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:09:38.117 00:09:38.117 --- 10.0.0.1 ping statistics --- 00:09:38.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.117 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2229361 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 2229361 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2229361 ']' 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.117 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.117 [2024-10-28 04:45:28.564915] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:09:38.117 [2024-10-28 04:45:28.565009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.117 [2024-10-28 04:45:28.706224] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
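The nvmf_tcp_init sequence traced above reduces to a short run of iproute2/iptables commands. The following is a condensed sketch of those steps only, with the interface names (cvl_0_0, cvl_0_1), addresses and namespace name taken from the trace itself:

ip netns add cvl_0_0_ns_spdk                                        # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one ice port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                  # host -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host sanity check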
00:09:38.375 [2024-10-28 04:45:28.760914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.375 [2024-10-28 04:45:28.816980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.375 [2024-10-28 04:45:28.817064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.375 [2024-10-28 04:45:28.817092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.375 [2024-10-28 04:45:28.817117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.375 [2024-10-28 04:45:28.817137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.375 [2024-10-28 04:45:28.819252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.375 [2024-10-28 04:45:28.819317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.375 [2024-10-28 04:45:28.819382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.375 [2024-10-28 04:45:28.819391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.375 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.375 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:38.375 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:38.375 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:38.375 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.634 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:38.634 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.634 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 [2024-10-28 04:45:28.983275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.634 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.634 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:38.634 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.634 04:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 Malloc0 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 [2024-10-28 04:45:29.045734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:38.634 test case1: single bdev can't be used in multiple subsystems 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 [2024-10-28 04:45:29.069477] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:38.634 [2024-10-28 04:45:29.069509] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:38.634 [2024-10-28 04:45:29.069524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.634 request: 00:09:38.634 { 00:09:38.634 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:38.634 "namespace": { 00:09:38.634 "bdev_name": "Malloc0", 00:09:38.634 "no_auto_visible": false 00:09:38.634 }, 00:09:38.634 "method": "nvmf_subsystem_add_ns", 00:09:38.634 "req_id": 1 00:09:38.634 } 00:09:38.634 Got JSON-RPC error response 00:09:38.634 response: 00:09:38.634 { 00:09:38.634 "code": -32602, 00:09:38.634 "message": "Invalid parameters" 00:09:38.634 } 
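The rpc_cmd calls behind test case1 correspond to plain scripts/rpc.py invocations against the target's default /var/tmp/spdk.sock socket. A minimal sketch using only the subsystem names, serials and the Malloc0 bdev shown above (the $RPC shorthand is added here for brevity); the final call is the one expected to fail, since Malloc0 is already claimed by cnode1:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed (expected)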
00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:38.634 Adding namespace failed - expected result. 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:38.634 test case2: host connect to nvmf target in multiple paths 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 [2024-10-28 04:45:29.077578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.634 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:39.208 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:39.776 04:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:39.776 04:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:39.776 04:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.776 04:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:39.776 04:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:42.303 04:45:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:42.303 04:45:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:42.303 04:45:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:42.303 04:45:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:42.303 04:45:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:42.304 04:45:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:42.304 04:45:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:42.304 [global] 00:09:42.304 thread=1 00:09:42.304 invalidate=1 00:09:42.304 rw=write 00:09:42.304 time_based=1 00:09:42.304 runtime=1 00:09:42.304 
ioengine=libaio 00:09:42.304 direct=1 00:09:42.304 bs=4096 00:09:42.304 iodepth=1 00:09:42.304 norandommap=0 00:09:42.304 numjobs=1 00:09:42.304 00:09:42.304 verify_dump=1 00:09:42.304 verify_backlog=512 00:09:42.304 verify_state_save=0 00:09:42.304 do_verify=1 00:09:42.304 verify=crc32c-intel 00:09:42.304 [job0] 00:09:42.304 filename=/dev/nvme0n1 00:09:42.304 Could not set queue depth (nvme0n1) 00:09:42.304 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.304 fio-3.35 00:09:42.304 Starting 1 thread 00:09:43.235 00:09:43.235 job0: (groupid=0, jobs=1): err= 0: pid=2229869: Mon Oct 28 04:45:33 2024 00:09:43.235 read: IOPS=69, BW=280KiB/s (287kB/s)(288KiB/1029msec) 00:09:43.235 slat (nsec): min=6291, max=35039, avg=15578.01, stdev=10117.86 00:09:43.235 clat (usec): min=299, max=42024, avg=12428.99, stdev=18952.07 00:09:43.235 lat (usec): min=307, max=42041, avg=12444.57, stdev=18960.29 00:09:43.235 clat percentiles (usec): 00:09:43.235 | 1.00th=[ 302], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:09:43.235 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 379], 00:09:43.235 | 70.00th=[ 420], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:43.235 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:43.235 | 99.99th=[42206] 00:09:43.235 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:09:43.235 slat (nsec): min=7777, max=54057, avg=11775.16, stdev=5102.81 00:09:43.235 clat (usec): min=214, max=298, avg=244.17, stdev=11.83 00:09:43.235 lat (usec): min=223, max=352, avg=255.95, stdev=13.32 00:09:43.235 clat percentiles (usec): 00:09:43.235 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 237], 00:09:43.235 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 243], 00:09:43.235 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 265], 00:09:43.235 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 297], 99.95th=[ 297], 00:09:43.235 | 99.99th=[ 297] 00:09:43.235 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:43.235 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:43.235 lat (usec) : 250=70.55%, 500=25.86% 00:09:43.235 lat (msec) : 50=3.60% 00:09:43.235 cpu : usr=0.78%, sys=0.58%, ctx=584, majf=0, minf=1 00:09:43.235 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.235 issued rwts: total=72,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.235 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.235 00:09:43.235 Run status group 0 (all jobs): 00:09:43.235 READ: bw=280KiB/s (287kB/s), 280KiB/s-280KiB/s (287kB/s-287kB/s), io=288KiB (295kB), run=1029-1029msec 00:09:43.235 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:09:43.235 00:09:43.235 Disk stats (read/write): 00:09:43.235 nvme0n1: ios=118/512, merge=0/0, ticks=757/122, in_queue=879, util=91.68% 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:43.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:43.235 04:45:33 
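The fio-wrapper call above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) produced the [global]/[job0] parameters echoed in the trace. Reassembled from those echoed values only, the equivalent standalone job file would look roughly like this (the /tmp path is arbitrary):

cat > /tmp/nmic-write-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-write-verify.fio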
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.235 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.235 rmmod nvme_tcp 00:09:43.235 rmmod nvme_fabrics 00:09:43.493 rmmod nvme_keyring 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2229361 ']' 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2229361 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2229361 ']' 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2229361 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2229361 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2229361' 00:09:43.493 killing process with pid 2229361 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2229361 00:09:43.493 04:45:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2229361 00:09:43.752 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:43.752 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:43.752 04:45:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:43.752 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:43.752 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:43.752 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:43.752 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:43.752 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.752 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:43.752 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.752 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.752 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.657 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:45.657 00:09:45.657 real 0m10.056s 00:09:45.657 user 0m22.231s 00:09:45.657 sys 0m2.419s 00:09:45.657 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.657 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.657 ************************************ 00:09:45.657 END TEST nvmf_nmic 00:09:45.657 ************************************ 00:09:45.657 04:45:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:45.657 04:45:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:45.657 04:45:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.657 04:45:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.657 ************************************ 00:09:45.657 START TEST nvmf_fio_target 00:09:45.657 ************************************ 00:09:45.657 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:45.916 * Looking for test storage... 
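The nvmftestfini/nvmf_tcp_fini teardown traced here undoes each setup step in turn. A hedged sketch of the equivalent manual cleanup, assuming $nvmfpid holds the nvmf_tgt pid printed earlier and that removing the namespace is what _remove_spdk_ns ultimately does (its body is not shown in this trace):

nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drops both paths (4420 and 4421)
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                        # nvmf_tgt started by nvmfappstart (2229361 above)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the SPDK_NVMF ACCEPT rule
ip netns delete cvl_0_0_ns_spdk                        # assumption: stands in for _remove_spdk_ns
ip -4 addr flush cvl_0_1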
00:09:45.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lcov --version 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:45.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.916 --rc genhtml_branch_coverage=1 00:09:45.916 --rc genhtml_function_coverage=1 00:09:45.916 --rc genhtml_legend=1 00:09:45.916 --rc geninfo_all_blocks=1 00:09:45.916 --rc geninfo_unexecuted_blocks=1 00:09:45.916 00:09:45.916 ' 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:45.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.916 --rc genhtml_branch_coverage=1 00:09:45.916 --rc genhtml_function_coverage=1 00:09:45.916 --rc genhtml_legend=1 00:09:45.916 --rc geninfo_all_blocks=1 00:09:45.916 --rc geninfo_unexecuted_blocks=1 00:09:45.916 00:09:45.916 ' 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:45.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.916 --rc genhtml_branch_coverage=1 00:09:45.916 --rc genhtml_function_coverage=1 00:09:45.916 --rc genhtml_legend=1 00:09:45.916 --rc geninfo_all_blocks=1 00:09:45.916 --rc geninfo_unexecuted_blocks=1 00:09:45.916 00:09:45.916 ' 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:45.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.916 --rc genhtml_branch_coverage=1 00:09:45.916 --rc genhtml_function_coverage=1 00:09:45.916 --rc genhtml_legend=1 00:09:45.916 --rc geninfo_all_blocks=1 00:09:45.916 --rc geninfo_unexecuted_blocks=1 00:09:45.916 00:09:45.916 ' 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.916 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.917 04:45:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.917 04:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.504 04:45:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:48.504 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:48.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:48.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.505 04:45:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:48.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:48.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:48.505 04:45:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:48.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:48.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:09:48.505 00:09:48.505 --- 10.0.0.2 ping statistics --- 00:09:48.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.505 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:48.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:48.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:09:48.505 00:09:48.505 --- 10.0.0.1 ping statistics --- 00:09:48.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.505 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:48.505 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.506 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2232057 00:09:48.506 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2232057 00:09:48.506 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:48.506 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2232057 ']' 00:09:48.506 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.506 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.506 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.506 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.506 04:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.506 [2024-10-28 04:45:38.719190] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
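As in the nmic run above, the target application is launched inside the namespace and the harness waits for its RPC socket before issuing any rpc.py calls. A minimal sketch of that launch/readiness step, assuming the default /var/tmp/spdk.sock socket and using an rpc_get_methods poll as a rough stand-in for the waitforlisten helper:

NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                             # pid of the backgrounded launch

# Poll the RPC socket until the target answers (unix sockets are visible across netns).
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done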
00:09:48.506 [2024-10-28 04:45:38.719271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.506 [2024-10-28 04:45:38.859497] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:48.506 [2024-10-28 04:45:38.896583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:48.506 [2024-10-28 04:45:38.943342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.506 [2024-10-28 04:45:38.943397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.506 [2024-10-28 04:45:38.943411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.506 [2024-10-28 04:45:38.943422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.506 [2024-10-28 04:45:38.943432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.506 [2024-10-28 04:45:38.944895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.506 [2024-10-28 04:45:38.944955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.506 [2024-10-28 04:45:38.945021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.506 [2024-10-28 04:45:38.945025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.438 04:45:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.438 04:45:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:49.438 04:45:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:49.438 04:45:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:49.438 04:45:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.438 04:45:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.438 04:45:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:49.695 [2024-10-28 04:45:40.095455] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.695 04:45:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.954 04:45:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:49.954 04:45:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.211 04:45:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:50.211 04:45:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.469 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:50.469 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.727 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:50.727 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:50.984 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.550 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:51.550 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.550 04:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:51.550 04:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.113 04:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:52.113 04:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:52.113 04:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:52.677 04:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:52.677 04:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.677 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:52.677 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.935 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.192 [2024-10-28 04:45:43.739755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.192 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:53.450 04:45:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:54.015 04:45:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:54.580 04:45:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:54.580 04:45:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:54.580 04:45:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.580 04:45:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:54.580 04:45:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:54.580 04:45:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:56.476 04:45:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:56.476 04:45:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:56.477 04:45:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.477 04:45:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:56.477 04:45:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.477 04:45:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:56.477 04:45:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:56.477 [global] 00:09:56.477 thread=1 00:09:56.477 invalidate=1 00:09:56.477 rw=write 00:09:56.477 time_based=1 00:09:56.477 runtime=1 00:09:56.477 ioengine=libaio 00:09:56.477 direct=1 00:09:56.477 bs=4096 00:09:56.477 iodepth=1 00:09:56.477 norandommap=0 00:09:56.477 numjobs=1 00:09:56.477 00:09:56.477 verify_dump=1 00:09:56.477 verify_backlog=512 00:09:56.477 verify_state_save=0 00:09:56.477 do_verify=1 00:09:56.477 verify=crc32c-intel 00:09:56.477 [job0] 00:09:56.477 filename=/dev/nvme0n1 00:09:56.477 [job1] 00:09:56.477 filename=/dev/nvme0n2 00:09:56.477 [job2] 00:09:56.477 filename=/dev/nvme0n3 00:09:56.477 [job3] 00:09:56.477 filename=/dev/nvme0n4 00:09:56.734 Could not set queue depth (nvme0n1) 00:09:56.734 Could not set queue depth (nvme0n2) 00:09:56.734 Could not set queue depth (nvme0n3) 00:09:56.734 Could not set queue depth (nvme0n4) 00:09:56.734 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.734 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.734 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.734 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.734 fio-3.35 00:09:56.734 Starting 4 threads 00:09:58.106 00:09:58.106 job0: (groupid=0, jobs=1): err= 0: pid=2233131: Mon Oct 28 04:45:48 2024 00:09:58.106 read: IOPS=1435, BW=5741KiB/s (5878kB/s)(5752KiB/1002msec) 00:09:58.106 slat (nsec): min=4631, max=64652, avg=18288.32, stdev=10850.09 00:09:58.106 clat (usec): min=256, max=41245, avg=431.46, stdev=1513.46 00:09:58.106 lat (usec): min=263, max=41258, avg=449.75, 
stdev=1513.18 00:09:58.106 clat percentiles (usec): 00:09:58.106 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 310], 00:09:58.106 | 30.00th=[ 326], 40.00th=[ 351], 50.00th=[ 371], 60.00th=[ 388], 00:09:58.106 | 70.00th=[ 404], 80.00th=[ 429], 90.00th=[ 465], 95.00th=[ 498], 00:09:58.106 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[40633], 99.95th=[41157], 00:09:58.106 | 99.99th=[41157] 00:09:58.106 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:09:58.106 slat (nsec): min=6213, max=58767, avg=10593.64, stdev=6819.63 00:09:58.106 clat (usec): min=150, max=452, avg=211.76, stdev=33.30 00:09:58.106 lat (usec): min=158, max=479, avg=222.36, stdev=34.65 00:09:58.106 clat percentiles (usec): 00:09:58.106 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 190], 00:09:58.106 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:09:58.106 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 245], 95.00th=[ 281], 00:09:58.106 | 99.00th=[ 322], 99.50th=[ 347], 99.90th=[ 420], 99.95th=[ 453], 00:09:58.106 | 99.99th=[ 453] 00:09:58.106 bw ( KiB/s): min= 8175, max= 8175, per=36.90%, avg=8175.00, stdev= 0.00, samples=1 00:09:58.106 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:09:58.106 lat (usec) : 250=47.07%, 500=50.74%, 750=2.12% 00:09:58.106 lat (msec) : 50=0.07% 00:09:58.106 cpu : usr=2.00%, sys=4.80%, ctx=2975, majf=0, minf=1 00:09:58.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.107 issued rwts: total=1438,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.107 job1: (groupid=0, jobs=1): err= 0: pid=2233144: Mon Oct 28 04:45:48 2024 00:09:58.107 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:09:58.107 slat (nsec): min=8537, max=38240, avg=18198.32, stdev=9004.81 00:09:58.107 clat (usec): min=40910, max=41311, avg=40991.15, stdev=77.34 00:09:58.107 lat (usec): min=40946, max=41320, avg=41009.34, stdev=73.92 00:09:58.107 clat percentiles (usec): 00:09:58.107 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:58.107 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:58.107 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:58.107 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:58.107 | 99.99th=[41157] 00:09:58.107 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:09:58.107 slat (nsec): min=9301, max=63883, avg=14184.68, stdev=6959.11 00:09:58.107 clat (usec): min=171, max=335, avg=200.01, stdev=18.98 00:09:58.107 lat (usec): min=182, max=345, avg=214.19, stdev=22.62 00:09:58.107 clat percentiles (usec): 00:09:58.107 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:09:58.107 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:09:58.107 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 229], 00:09:58.107 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 338], 99.95th=[ 338], 00:09:58.107 | 99.99th=[ 338] 00:09:58.107 bw ( KiB/s): min= 4087, max= 4087, per=18.45%, avg=4087.00, stdev= 0.00, samples=1 00:09:58.107 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:58.107 lat (usec) : 250=94.38%, 500=1.50% 00:09:58.107 lat (msec) : 50=4.12% 00:09:58.107 cpu : 
usr=0.59%, sys=0.79%, ctx=535, majf=0, minf=1 00:09:58.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.107 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.107 job2: (groupid=0, jobs=1): err= 0: pid=2233178: Mon Oct 28 04:45:48 2024 00:09:58.107 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:58.107 slat (nsec): min=5902, max=49069, avg=13352.84, stdev=7016.99 00:09:58.107 clat (usec): min=234, max=637, avg=364.09, stdev=79.01 00:09:58.107 lat (usec): min=240, max=646, avg=377.44, stdev=80.20 00:09:58.107 clat percentiles (usec): 00:09:58.107 | 1.00th=[ 249], 5.00th=[ 265], 10.00th=[ 285], 20.00th=[ 306], 00:09:58.107 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 359], 00:09:58.107 | 70.00th=[ 375], 80.00th=[ 408], 90.00th=[ 486], 95.00th=[ 545], 00:09:58.107 | 99.00th=[ 603], 99.50th=[ 611], 99.90th=[ 635], 99.95th=[ 635], 00:09:58.107 | 99.99th=[ 635] 00:09:58.107 write: IOPS=1755, BW=7021KiB/s (7189kB/s)(7028KiB/1001msec); 0 zone resets 00:09:58.107 slat (nsec): min=7709, max=58473, avg=12785.19, stdev=6731.56 00:09:58.107 clat (usec): min=166, max=460, avg=219.67, stdev=33.39 00:09:58.107 lat (usec): min=174, max=470, avg=232.45, stdev=35.41 00:09:58.107 clat percentiles (usec): 00:09:58.107 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 200], 00:09:58.107 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:09:58.107 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 253], 95.00th=[ 293], 00:09:58.107 | 99.00th=[ 355], 99.50th=[ 392], 99.90th=[ 457], 99.95th=[ 461], 00:09:58.107 | 99.99th=[ 461] 00:09:58.107 bw ( KiB/s): min= 8175, max= 8175, per=36.90%, avg=8175.00, stdev= 0.00, samples=1 00:09:58.107 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:09:58.107 lat (usec) : 250=48.34%, 500=47.59%, 750=4.07% 00:09:58.107 cpu : usr=3.70%, sys=5.20%, ctx=3295, majf=0, minf=1 00:09:58.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.107 issued rwts: total=1536,1757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.107 job3: (groupid=0, jobs=1): err= 0: pid=2233190: Mon Oct 28 04:45:48 2024 00:09:58.107 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:58.107 slat (nsec): min=5576, max=37816, avg=11012.13, stdev=6226.13 00:09:58.107 clat (usec): min=274, max=41402, avg=362.38, stdev=1049.11 00:09:58.107 lat (usec): min=280, max=41411, avg=373.39, stdev=1049.21 00:09:58.107 clat percentiles (usec): 00:09:58.107 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 297], 00:09:58.107 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 334], 00:09:58.107 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 388], 95.00th=[ 461], 00:09:58.107 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 685], 99.95th=[41157], 00:09:58.107 | 99.99th=[41157] 00:09:58.107 write: IOPS=1809, BW=7237KiB/s (7410kB/s)(7244KiB/1001msec); 0 zone resets 00:09:58.107 slat (nsec): min=7203, max=69328, avg=12853.75, stdev=7472.15 00:09:58.107 clat (usec): min=173, max=486, 
avg=216.32, stdev=42.53 00:09:58.107 lat (usec): min=183, max=514, avg=229.17, stdev=46.86 00:09:58.107 clat percentiles (usec): 00:09:58.107 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:09:58.107 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:09:58.107 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 251], 95.00th=[ 314], 00:09:58.107 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 474], 99.95th=[ 486], 00:09:58.107 | 99.99th=[ 486] 00:09:58.107 bw ( KiB/s): min= 8175, max= 8175, per=36.90%, avg=8175.00, stdev= 0.00, samples=1 00:09:58.107 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:09:58.107 lat (usec) : 250=48.52%, 500=50.46%, 750=0.99% 00:09:58.107 lat (msec) : 50=0.03% 00:09:58.107 cpu : usr=3.60%, sys=4.80%, ctx=3347, majf=0, minf=1 00:09:58.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.107 issued rwts: total=1536,1811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.107 00:09:58.107 Run status group 0 (all jobs): 00:09:58.107 READ: bw=17.5MiB/s (18.3MB/s), 86.8KiB/s-6138KiB/s (88.9kB/s-6285kB/s), io=17.7MiB (18.6MB), run=1001-1014msec 00:09:58.107 WRITE: bw=21.6MiB/s (22.7MB/s), 2020KiB/s-7237KiB/s (2068kB/s-7410kB/s), io=21.9MiB (23.0MB), run=1001-1014msec 00:09:58.107 00:09:58.107 Disk stats (read/write): 00:09:58.107 nvme0n1: ios=1180/1536, merge=0/0, ticks=488/306, in_queue=794, util=86.27% 00:09:58.107 nvme0n2: ios=40/512, merge=0/0, ticks=1600/95, in_queue=1695, util=88.99% 00:09:58.107 nvme0n3: ios=1286/1536, merge=0/0, ticks=1352/315, in_queue=1667, util=93.07% 00:09:58.107 nvme0n4: ios=1377/1536, merge=0/0, ticks=538/312, in_queue=850, util=95.87% 00:09:58.107 04:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:58.107 [global] 00:09:58.107 thread=1 00:09:58.107 invalidate=1 00:09:58.107 rw=randwrite 00:09:58.107 time_based=1 00:09:58.107 runtime=1 00:09:58.107 ioengine=libaio 00:09:58.107 direct=1 00:09:58.107 bs=4096 00:09:58.107 iodepth=1 00:09:58.107 norandommap=0 00:09:58.107 numjobs=1 00:09:58.107 00:09:58.107 verify_dump=1 00:09:58.107 verify_backlog=512 00:09:58.107 verify_state_save=0 00:09:58.107 do_verify=1 00:09:58.107 verify=crc32c-intel 00:09:58.107 [job0] 00:09:58.107 filename=/dev/nvme0n1 00:09:58.107 [job1] 00:09:58.107 filename=/dev/nvme0n2 00:09:58.107 [job2] 00:09:58.107 filename=/dev/nvme0n3 00:09:58.107 [job3] 00:09:58.107 filename=/dev/nvme0n4 00:09:58.107 Could not set queue depth (nvme0n1) 00:09:58.107 Could not set queue depth (nvme0n2) 00:09:58.107 Could not set queue depth (nvme0n3) 00:09:58.107 Could not set queue depth (nvme0n4) 00:09:58.364 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.364 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.364 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.364 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.364 fio-3.35 00:09:58.364 Starting 4 threads 00:09:59.734 
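At this point the trace has finished building the target and has just launched the second (randwrite) fio pass. For reference, the target-side setup performed earlier in the log condenses to the sketch below; $RPC is shorthand introduced only for this sketch (the trace spells out the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path every time), and every command otherwise appears verbatim above.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # shorthand for this sketch only
    $RPC nvmf_create_transport -t tcp -o -u 8192                           # transport options exactly as traced
    $RPC bdev_malloc_create 64 512          # run 7 times -> Malloc0..Malloc6 (64 MiB each, 512-byte blocks)
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # and again for Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0            # and again for concat0
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME                 # waitforserial: retried until it reports 4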
00:09:59.734 job0: (groupid=0, jobs=1): err= 0: pid=2233471: Mon Oct 28 04:45:49 2024 00:09:59.734 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:09:59.734 slat (nsec): min=8610, max=35534, avg=27589.18, stdev=9931.59 00:09:59.734 clat (usec): min=40419, max=41039, avg=40943.97, stdev=126.69 00:09:59.734 lat (usec): min=40428, max=41057, avg=40971.56, stdev=129.69 00:09:59.735 clat percentiles (usec): 00:09:59.735 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:59.735 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:59.735 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:59.735 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:59.735 | 99.99th=[41157] 00:09:59.735 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:09:59.735 slat (nsec): min=7590, max=46419, avg=9626.65, stdev=3599.17 00:09:59.735 clat (usec): min=164, max=359, avg=192.90, stdev=14.21 00:09:59.735 lat (usec): min=171, max=405, avg=202.53, stdev=15.47 00:09:59.735 clat percentiles (usec): 00:09:59.735 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 182], 00:09:59.735 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:09:59.735 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 215], 00:09:59.735 | 99.00th=[ 227], 99.50th=[ 231], 99.90th=[ 359], 99.95th=[ 359], 00:09:59.735 | 99.99th=[ 359] 00:09:59.735 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.735 lat (usec) : 250=95.51%, 500=0.37% 00:09:59.735 lat (msec) : 50=4.12% 00:09:59.735 cpu : usr=0.50%, sys=0.50%, ctx=534, majf=0, minf=2 00:09:59.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.735 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.735 job1: (groupid=0, jobs=1): err= 0: pid=2233472: Mon Oct 28 04:45:49 2024 00:09:59.735 read: IOPS=23, BW=92.8KiB/s (95.1kB/s)(96.0KiB/1034msec) 00:09:59.735 slat (nsec): min=6790, max=33181, avg=25260.83, stdev=9344.48 00:09:59.735 clat (usec): min=328, max=41230, avg=37594.99, stdev=11470.94 00:09:59.735 lat (usec): min=341, max=41244, avg=37620.25, stdev=11475.59 00:09:59.735 clat percentiles (usec): 00:09:59.735 | 1.00th=[ 330], 5.00th=[ 375], 10.00th=[40633], 20.00th=[41157], 00:09:59.735 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:59.735 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:59.735 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:59.735 | 99.99th=[41157] 00:09:59.735 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:09:59.735 slat (nsec): min=6179, max=50814, avg=8728.71, stdev=4318.83 00:09:59.735 clat (usec): min=179, max=410, avg=244.15, stdev=45.55 00:09:59.735 lat (usec): min=186, max=427, avg=252.87, stdev=46.42 00:09:59.735 clat percentiles (usec): 00:09:59.735 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 208], 00:09:59.735 | 30.00th=[ 217], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 245], 00:09:59.735 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 306], 95.00th=[ 347], 00:09:59.735 | 99.00th=[ 
400], 99.50th=[ 408], 99.90th=[ 412], 99.95th=[ 412], 00:09:59.735 | 99.99th=[ 412] 00:09:59.735 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.735 lat (usec) : 250=67.54%, 500=28.36% 00:09:59.735 lat (msec) : 50=4.10% 00:09:59.735 cpu : usr=0.39%, sys=0.39%, ctx=536, majf=0, minf=1 00:09:59.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.735 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.735 job2: (groupid=0, jobs=1): err= 0: pid=2233473: Mon Oct 28 04:45:49 2024 00:09:59.735 read: IOPS=54, BW=218KiB/s (223kB/s)(224KiB/1027msec) 00:09:59.735 slat (nsec): min=6246, max=35603, avg=18779.91, stdev=12654.65 00:09:59.735 clat (usec): min=327, max=41531, avg=15658.92, stdev=19804.22 00:09:59.735 lat (usec): min=333, max=41564, avg=15677.70, stdev=19811.44 00:09:59.735 clat percentiles (usec): 00:09:59.735 | 1.00th=[ 326], 5.00th=[ 359], 10.00th=[ 371], 20.00th=[ 375], 00:09:59.735 | 30.00th=[ 404], 40.00th=[ 494], 50.00th=[ 523], 60.00th=[ 685], 00:09:59.735 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:59.735 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:59.735 | 99.99th=[41681] 00:09:59.735 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:09:59.735 slat (nsec): min=7548, max=45333, avg=10823.60, stdev=4515.70 00:09:59.735 clat (usec): min=176, max=576, avg=276.03, stdev=60.41 00:09:59.735 lat (usec): min=193, max=586, avg=286.86, stdev=60.83 00:09:59.735 clat percentiles (usec): 00:09:59.735 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 227], 00:09:59.735 | 30.00th=[ 243], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:09:59.735 | 70.00th=[ 289], 80.00th=[ 322], 90.00th=[ 388], 95.00th=[ 396], 00:09:59.735 | 99.00th=[ 433], 99.50th=[ 474], 99.90th=[ 578], 99.95th=[ 578], 00:09:59.735 | 99.99th=[ 578] 00:09:59.735 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.735 lat (usec) : 250=31.34%, 500=62.68%, 750=2.11%, 1000=0.18% 00:09:59.735 lat (msec) : 50=3.70% 00:09:59.735 cpu : usr=0.49%, sys=0.58%, ctx=570, majf=0, minf=1 00:09:59.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.735 issued rwts: total=56,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.735 job3: (groupid=0, jobs=1): err= 0: pid=2233474: Mon Oct 28 04:45:49 2024 00:09:59.735 read: IOPS=20, BW=82.8KiB/s (84.7kB/s)(84.0KiB/1015msec) 00:09:59.735 slat (nsec): min=7135, max=35821, avg=28936.71, stdev=9367.27 00:09:59.735 clat (usec): min=40743, max=41084, avg=40952.59, stdev=66.99 00:09:59.735 lat (usec): min=40750, max=41102, avg=40981.53, stdev=68.58 00:09:59.735 clat percentiles (usec): 00:09:59.735 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:59.735 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:09:59.735 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:59.735 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:59.735 | 99.99th=[41157] 00:09:59.735 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:09:59.735 slat (nsec): min=6517, max=42425, avg=11303.16, stdev=4719.90 00:09:59.735 clat (usec): min=183, max=507, avg=286.40, stdev=78.69 00:09:59.735 lat (usec): min=198, max=523, avg=297.71, stdev=79.36 00:09:59.735 clat percentiles (usec): 00:09:59.735 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 223], 00:09:59.735 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 265], 00:09:59.735 | 70.00th=[ 351], 80.00th=[ 379], 90.00th=[ 396], 95.00th=[ 416], 00:09:59.735 | 99.00th=[ 474], 99.50th=[ 494], 99.90th=[ 506], 99.95th=[ 506], 00:09:59.735 | 99.99th=[ 506] 00:09:59.735 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.735 lat (usec) : 250=49.53%, 500=46.34%, 750=0.19% 00:09:59.735 lat (msec) : 50=3.94% 00:09:59.735 cpu : usr=0.39%, sys=0.39%, ctx=534, majf=0, minf=1 00:09:59.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.735 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.735 00:09:59.735 Run status group 0 (all jobs): 00:09:59.735 READ: bw=476KiB/s (487kB/s), 82.8KiB/s-218KiB/s (84.7kB/s-223kB/s), io=492KiB (504kB), run=1007-1034msec 00:09:59.735 WRITE: bw=7923KiB/s (8113kB/s), 1981KiB/s-2034KiB/s (2028kB/s-2083kB/s), io=8192KiB (8389kB), run=1007-1034msec 00:09:59.735 00:09:59.735 Disk stats (read/write): 00:09:59.735 nvme0n1: ios=68/512, merge=0/0, ticks=768/90, in_queue=858, util=87.58% 00:09:59.735 nvme0n2: ios=69/512, merge=0/0, ticks=763/123, in_queue=886, util=91.46% 00:09:59.735 nvme0n3: ios=109/512, merge=0/0, ticks=1122/136, in_queue=1258, util=98.02% 00:09:59.735 nvme0n4: ios=62/512, merge=0/0, ticks=937/148, in_queue=1085, util=96.86% 00:09:59.735 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:59.735 [global] 00:09:59.735 thread=1 00:09:59.735 invalidate=1 00:09:59.735 rw=write 00:09:59.735 time_based=1 00:09:59.735 runtime=1 00:09:59.735 ioengine=libaio 00:09:59.735 direct=1 00:09:59.735 bs=4096 00:09:59.735 iodepth=128 00:09:59.735 norandommap=0 00:09:59.735 numjobs=1 00:09:59.735 00:09:59.735 verify_dump=1 00:09:59.735 verify_backlog=512 00:09:59.735 verify_state_save=0 00:09:59.735 do_verify=1 00:09:59.735 verify=crc32c-intel 00:09:59.735 [job0] 00:09:59.735 filename=/dev/nvme0n1 00:09:59.735 [job1] 00:09:59.735 filename=/dev/nvme0n2 00:09:59.735 [job2] 00:09:59.735 filename=/dev/nvme0n3 00:09:59.735 [job3] 00:09:59.735 filename=/dev/nvme0n4 00:09:59.735 Could not set queue depth (nvme0n1) 00:09:59.735 Could not set queue depth (nvme0n2) 00:09:59.735 Could not set queue depth (nvme0n3) 00:09:59.735 Could not set queue depth (nvme0n4) 00:09:59.735 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.735 job1: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.735 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.735 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.735 fio-3.35 00:09:59.735 Starting 4 threads 00:10:01.109 00:10:01.109 job0: (groupid=0, jobs=1): err= 0: pid=2233704: Mon Oct 28 04:45:51 2024 00:10:01.109 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:10:01.109 slat (usec): min=2, max=10214, avg=108.02, stdev=689.22 00:10:01.109 clat (usec): min=3482, max=37830, avg=14437.92, stdev=5242.66 00:10:01.109 lat (usec): min=3492, max=37842, avg=14545.94, stdev=5300.81 00:10:01.109 clat percentiles (usec): 00:10:01.109 | 1.00th=[ 4555], 5.00th=[ 8094], 10.00th=[ 9634], 20.00th=[10945], 00:10:01.109 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13042], 60.00th=[13960], 00:10:01.109 | 70.00th=[14746], 80.00th=[16909], 90.00th=[22938], 95.00th=[25297], 00:10:01.109 | 99.00th=[30802], 99.50th=[31065], 99.90th=[32375], 99.95th=[32375], 00:10:01.109 | 99.99th=[38011] 00:10:01.109 write: IOPS=4647, BW=18.2MiB/s (19.0MB/s)(18.2MiB/1005msec); 0 zone resets 00:10:01.109 slat (usec): min=3, max=18963, avg=86.19, stdev=651.38 00:10:01.109 clat (usec): min=1157, max=36847, avg=13045.27, stdev=5056.10 00:10:01.109 lat (usec): min=1178, max=36891, avg=13131.46, stdev=5100.00 00:10:01.109 clat percentiles (usec): 00:10:01.109 | 1.00th=[ 4752], 5.00th=[ 7046], 10.00th=[ 7963], 20.00th=[ 9372], 00:10:01.109 | 30.00th=[10421], 40.00th=[11207], 50.00th=[11863], 60.00th=[12518], 00:10:01.109 | 70.00th=[14353], 80.00th=[16188], 90.00th=[18220], 95.00th=[23462], 00:10:01.109 | 99.00th=[33162], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:10:01.109 | 99.99th=[36963] 00:10:01.109 bw ( KiB/s): min=18128, max=18736, per=30.30%, avg=18432.00, stdev=429.92, samples=2 00:10:01.109 iops : min= 4532, max= 4684, avg=4608.00, stdev=107.48, samples=2 00:10:01.109 lat (msec) : 2=0.02%, 4=0.45%, 10=18.08%, 20=69.08%, 50=12.36% 00:10:01.109 cpu : usr=6.77%, sys=10.76%, ctx=378, majf=0, minf=1 00:10:01.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:01.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.109 issued rwts: total=4608,4671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.109 job1: (groupid=0, jobs=1): err= 0: pid=2233705: Mon Oct 28 04:45:51 2024 00:10:01.109 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:10:01.109 slat (usec): min=2, max=14664, avg=122.68, stdev=803.73 00:10:01.109 clat (usec): min=5115, max=54555, avg=16198.95, stdev=10237.35 00:10:01.109 lat (usec): min=5124, max=57906, avg=16321.64, stdev=10300.53 00:10:01.109 clat percentiles (usec): 00:10:01.109 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10159], 00:10:01.109 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11863], 00:10:01.109 | 70.00th=[13698], 80.00th=[23200], 90.00th=[32113], 95.00th=[40633], 00:10:01.109 | 99.00th=[49546], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:10:01.109 | 99.99th=[54789] 00:10:01.109 write: IOPS=4154, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1002msec); 0 zone resets 00:10:01.109 slat (usec): min=3, max=13670, avg=104.52, stdev=613.62 
00:10:01.109 clat (usec): min=462, max=55646, avg=14609.69, stdev=8514.04 00:10:01.109 lat (usec): min=585, max=55694, avg=14714.21, stdev=8550.79 00:10:01.109 clat percentiles (usec): 00:10:01.109 | 1.00th=[ 2540], 5.00th=[ 5145], 10.00th=[ 8160], 20.00th=[ 9503], 00:10:01.109 | 30.00th=[10552], 40.00th=[11469], 50.00th=[12256], 60.00th=[14746], 00:10:01.109 | 70.00th=[15926], 80.00th=[16712], 90.00th=[22676], 95.00th=[29230], 00:10:01.109 | 99.00th=[49021], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:10:01.109 | 99.99th=[55837] 00:10:01.109 bw ( KiB/s): min=12288, max=20521, per=26.97%, avg=16404.50, stdev=5821.61, samples=2 00:10:01.109 iops : min= 3072, max= 5130, avg=4101.00, stdev=1455.23, samples=2 00:10:01.109 lat (usec) : 500=0.01%, 750=0.02% 00:10:01.109 lat (msec) : 2=0.27%, 4=1.66%, 10=19.42%, 20=60.32%, 50=17.44% 00:10:01.109 lat (msec) : 100=0.86% 00:10:01.109 cpu : usr=6.79%, sys=9.49%, ctx=376, majf=0, minf=1 00:10:01.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:01.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.109 issued rwts: total=4096,4163,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.109 job2: (groupid=0, jobs=1): err= 0: pid=2233706: Mon Oct 28 04:45:51 2024 00:10:01.109 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:10:01.109 slat (usec): min=2, max=25503, avg=127.21, stdev=883.58 00:10:01.109 clat (usec): min=6053, max=56607, avg=18417.07, stdev=8003.94 00:10:01.109 lat (usec): min=6059, max=56616, avg=18544.28, stdev=8057.54 00:10:01.109 clat percentiles (usec): 00:10:01.109 | 1.00th=[ 9634], 5.00th=[11207], 10.00th=[11994], 20.00th=[12387], 00:10:01.109 | 30.00th=[12780], 40.00th=[13042], 50.00th=[14091], 60.00th=[19792], 00:10:01.109 | 70.00th=[21365], 80.00th=[24249], 90.00th=[28705], 95.00th=[35390], 00:10:01.109 | 99.00th=[41681], 99.50th=[51643], 99.90th=[51643], 99.95th=[54264], 00:10:01.109 | 99.99th=[56361] 00:10:01.109 write: IOPS=3628, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1006msec); 0 zone resets 00:10:01.109 slat (usec): min=3, max=26064, avg=118.95, stdev=881.46 00:10:01.109 clat (usec): min=689, max=99467, avg=15985.89, stdev=10580.99 00:10:01.109 lat (usec): min=697, max=99480, avg=16104.84, stdev=10620.75 00:10:01.109 clat percentiles (usec): 00:10:01.109 | 1.00th=[ 3064], 5.00th=[ 8717], 10.00th=[ 9896], 20.00th=[11863], 00:10:01.109 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13304], 60.00th=[14353], 00:10:01.109 | 70.00th=[15533], 80.00th=[16188], 90.00th=[26346], 95.00th=[29492], 00:10:01.109 | 99.00th=[79168], 99.50th=[91751], 99.90th=[94897], 99.95th=[99091], 00:10:01.109 | 99.99th=[99091] 00:10:01.109 bw ( KiB/s): min=12288, max=16384, per=23.57%, avg=14336.00, stdev=2896.31, samples=2 00:10:01.109 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:10:01.109 lat (usec) : 750=0.06% 00:10:01.109 lat (msec) : 2=0.17%, 4=0.33%, 10=6.17%, 20=66.93%, 50=25.12% 00:10:01.109 lat (msec) : 100=1.23% 00:10:01.109 cpu : usr=4.58%, sys=9.05%, ctx=283, majf=0, minf=2 00:10:01.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:01.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.109 issued rwts: total=3584,3650,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:01.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.109 job3: (groupid=0, jobs=1): err= 0: pid=2233707: Mon Oct 28 04:45:51 2024 00:10:01.109 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:01.110 slat (usec): min=2, max=18752, avg=163.71, stdev=1043.89 00:10:01.110 clat (usec): min=6831, max=50825, avg=20878.12, stdev=7471.96 00:10:01.110 lat (usec): min=6843, max=50862, avg=21041.82, stdev=7556.00 00:10:01.110 clat percentiles (usec): 00:10:01.110 | 1.00th=[ 6915], 5.00th=[13829], 10.00th=[14353], 20.00th=[15008], 00:10:01.110 | 30.00th=[15664], 40.00th=[16319], 50.00th=[17433], 60.00th=[20055], 00:10:01.110 | 70.00th=[23200], 80.00th=[26870], 90.00th=[33817], 95.00th=[34341], 00:10:01.110 | 99.00th=[41681], 99.50th=[43254], 99.90th=[45351], 99.95th=[47973], 00:10:01.110 | 99.99th=[50594] 00:10:01.110 write: IOPS=2802, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1004msec); 0 zone resets 00:10:01.110 slat (usec): min=3, max=27897, avg=199.56, stdev=1333.13 00:10:01.110 clat (usec): min=1697, max=72149, avg=24796.13, stdev=11541.03 00:10:01.110 lat (usec): min=5498, max=74934, avg=24995.69, stdev=11648.46 00:10:01.110 clat percentiles (usec): 00:10:01.110 | 1.00th=[ 5932], 5.00th=[11863], 10.00th=[13042], 20.00th=[14746], 00:10:01.110 | 30.00th=[18482], 40.00th=[20317], 50.00th=[23200], 60.00th=[25560], 00:10:01.110 | 70.00th=[28443], 80.00th=[31589], 90.00th=[35390], 95.00th=[44827], 00:10:01.110 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:10:01.110 | 99.99th=[71828] 00:10:01.110 bw ( KiB/s): min=10032, max=11456, per=17.66%, avg=10744.00, stdev=1006.92, samples=2 00:10:01.110 iops : min= 2508, max= 2864, avg=2686.00, stdev=251.73, samples=2 00:10:01.110 lat (msec) : 2=0.02%, 10=2.36%, 20=45.94%, 50=49.80%, 100=1.88% 00:10:01.110 cpu : usr=2.89%, sys=3.89%, ctx=301, majf=0, minf=1 00:10:01.110 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:01.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.110 issued rwts: total=2560,2814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.110 00:10:01.110 Run status group 0 (all jobs): 00:10:01.110 READ: bw=57.7MiB/s (60.5MB/s), 9.96MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=58.0MiB (60.8MB), run=1002-1006msec 00:10:01.110 WRITE: bw=59.4MiB/s (62.3MB/s), 10.9MiB/s-18.2MiB/s (11.5MB/s-19.0MB/s), io=59.8MiB (62.7MB), run=1002-1006msec 00:10:01.110 00:10:01.110 Disk stats (read/write): 00:10:01.110 nvme0n1: ios=3812/4096, merge=0/0, ticks=34042/34228, in_queue=68270, util=99.80% 00:10:01.110 nvme0n2: ios=3117/3488, merge=0/0, ticks=22461/22102, in_queue=44563, util=99.80% 00:10:01.110 nvme0n3: ios=3131/3163, merge=0/0, ticks=32837/27100, in_queue=59937, util=97.70% 00:10:01.110 nvme0n4: ios=2107/2551, merge=0/0, ticks=19574/32018, in_queue=51592, util=98.32% 00:10:01.110 04:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:01.110 [global] 00:10:01.110 thread=1 00:10:01.110 invalidate=1 00:10:01.110 rw=randwrite 00:10:01.110 time_based=1 00:10:01.110 runtime=1 00:10:01.110 ioengine=libaio 00:10:01.110 direct=1 00:10:01.110 bs=4096 00:10:01.110 iodepth=128 00:10:01.110 norandommap=0 00:10:01.110 numjobs=1 00:10:01.110 
00:10:01.110 verify_dump=1 00:10:01.110 verify_backlog=512 00:10:01.110 verify_state_save=0 00:10:01.110 do_verify=1 00:10:01.110 verify=crc32c-intel 00:10:01.110 [job0] 00:10:01.110 filename=/dev/nvme0n1 00:10:01.110 [job1] 00:10:01.110 filename=/dev/nvme0n2 00:10:01.110 [job2] 00:10:01.110 filename=/dev/nvme0n3 00:10:01.110 [job3] 00:10:01.110 filename=/dev/nvme0n4 00:10:01.110 Could not set queue depth (nvme0n1) 00:10:01.110 Could not set queue depth (nvme0n2) 00:10:01.110 Could not set queue depth (nvme0n3) 00:10:01.110 Could not set queue depth (nvme0n4) 00:10:01.110 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.110 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.110 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.110 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.110 fio-3.35 00:10:01.110 Starting 4 threads 00:10:02.485 00:10:02.485 job0: (groupid=0, jobs=1): err= 0: pid=2233931: Mon Oct 28 04:45:52 2024 00:10:02.485 read: IOPS=3957, BW=15.5MiB/s (16.2MB/s)(15.6MiB/1008msec) 00:10:02.485 slat (usec): min=2, max=14819, avg=92.04, stdev=721.83 00:10:02.485 clat (usec): min=2264, max=48253, avg=13266.82, stdev=5240.16 00:10:02.485 lat (usec): min=2273, max=48256, avg=13358.86, stdev=5267.87 00:10:02.485 clat percentiles (usec): 00:10:02.485 | 1.00th=[ 4424], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[10028], 00:10:02.485 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11994], 60.00th=[12518], 00:10:02.485 | 70.00th=[12911], 80.00th=[16319], 90.00th=[19268], 95.00th=[21890], 00:10:02.485 | 99.00th=[35914], 99.50th=[47973], 99.90th=[48497], 99.95th=[48497], 00:10:02.485 | 99.99th=[48497] 00:10:02.485 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:10:02.485 slat (usec): min=3, max=16630, avg=123.47, stdev=848.13 00:10:02.485 clat (usec): min=360, max=114376, avg=18272.99, stdev=22476.53 00:10:02.485 lat (usec): min=401, max=114388, avg=18396.46, stdev=22609.15 00:10:02.485 clat percentiles (usec): 00:10:02.485 | 1.00th=[ 1057], 5.00th=[ 3884], 10.00th=[ 6521], 20.00th=[ 8586], 00:10:02.485 | 30.00th=[ 9765], 40.00th=[ 10290], 50.00th=[ 11207], 60.00th=[ 12518], 00:10:02.485 | 70.00th=[ 13960], 80.00th=[ 17957], 90.00th=[ 34341], 95.00th=[ 85459], 00:10:02.485 | 99.00th=[109577], 99.50th=[111674], 99.90th=[114820], 99.95th=[114820], 00:10:02.485 | 99.99th=[114820] 00:10:02.485 bw ( KiB/s): min=14288, max=18480, per=25.00%, avg=16384.00, stdev=2964.19, samples=2 00:10:02.485 iops : min= 3572, max= 4620, avg=4096.00, stdev=741.05, samples=2 00:10:02.485 lat (usec) : 500=0.01%, 750=0.17%, 1000=0.19% 00:10:02.485 lat (msec) : 2=1.27%, 4=1.34%, 10=21.86%, 20=63.82%, 50=7.69% 00:10:02.485 lat (msec) : 100=1.84%, 250=1.81% 00:10:02.485 cpu : usr=3.08%, sys=5.96%, ctx=353, majf=0, minf=1 00:10:02.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:02.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.485 issued rwts: total=3989,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.485 job1: (groupid=0, jobs=1): err= 0: pid=2233932: Mon Oct 28 04:45:52 2024 00:10:02.485 read: 
IOPS=4116, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1009msec) 00:10:02.485 slat (usec): min=3, max=22731, avg=117.59, stdev=931.61 00:10:02.485 clat (usec): min=1103, max=47970, avg=14469.80, stdev=5252.92 00:10:02.485 lat (usec): min=1117, max=47977, avg=14587.40, stdev=5319.18 00:10:02.485 clat percentiles (usec): 00:10:02.485 | 1.00th=[ 1483], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[11469], 00:10:02.485 | 30.00th=[11731], 40.00th=[12518], 50.00th=[13566], 60.00th=[14091], 00:10:02.485 | 70.00th=[15533], 80.00th=[16909], 90.00th=[19006], 95.00th=[23987], 00:10:02.485 | 99.00th=[39060], 99.50th=[46400], 99.90th=[47973], 99.95th=[47973], 00:10:02.485 | 99.99th=[47973] 00:10:02.485 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:10:02.485 slat (usec): min=3, max=25797, avg=98.73, stdev=736.53 00:10:02.485 clat (usec): min=988, max=47971, avg=14692.89, stdev=8756.53 00:10:02.485 lat (usec): min=997, max=47980, avg=14791.62, stdev=8812.72 00:10:02.485 clat percentiles (usec): 00:10:02.485 | 1.00th=[ 4228], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 9241], 00:10:02.485 | 30.00th=[10683], 40.00th=[11207], 50.00th=[12125], 60.00th=[12518], 00:10:02.485 | 70.00th=[13173], 80.00th=[15664], 90.00th=[29754], 95.00th=[38011], 00:10:02.485 | 99.00th=[44303], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:10:02.485 | 99.99th=[47973] 00:10:02.485 bw ( KiB/s): min=15824, max=20480, per=27.70%, avg=18152.00, stdev=3292.29, samples=2 00:10:02.485 iops : min= 3956, max= 5120, avg=4538.00, stdev=823.07, samples=2 00:10:02.485 lat (usec) : 1000=0.05% 00:10:02.485 lat (msec) : 2=0.57%, 4=0.35%, 10=15.32%, 20=70.79%, 50=12.92% 00:10:02.485 cpu : usr=3.57%, sys=5.56%, ctx=375, majf=0, minf=1 00:10:02.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:02.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.486 issued rwts: total=4154,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.486 job2: (groupid=0, jobs=1): err= 0: pid=2233933: Mon Oct 28 04:45:52 2024 00:10:02.486 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:10:02.486 slat (usec): min=2, max=20050, avg=125.93, stdev=934.99 00:10:02.486 clat (usec): min=6779, max=59631, avg=16752.33, stdev=7947.23 00:10:02.486 lat (usec): min=6785, max=59651, avg=16878.26, stdev=8011.71 00:10:02.486 clat percentiles (usec): 00:10:02.486 | 1.00th=[ 6849], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[12125], 00:10:02.486 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14877], 60.00th=[16188], 00:10:02.486 | 70.00th=[16909], 80.00th=[19530], 90.00th=[23725], 95.00th=[38011], 00:10:02.486 | 99.00th=[43779], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:10:02.486 | 99.99th=[59507] 00:10:02.486 write: IOPS=3214, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1002msec); 0 zone resets 00:10:02.486 slat (usec): min=3, max=14016, avg=174.32, stdev=1034.71 00:10:02.486 clat (usec): min=526, max=114372, avg=23511.83, stdev=24052.63 00:10:02.486 lat (usec): min=945, max=114389, avg=23686.14, stdev=24214.12 00:10:02.486 clat percentiles (msec): 00:10:02.486 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:10:02.486 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:10:02.486 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 53], 95.00th=[ 99], 00:10:02.486 | 99.00th=[ 111], 99.50th=[ 112], 99.90th=[ 115], 99.95th=[ 
115], 00:10:02.486 | 99.99th=[ 115] 00:10:02.486 bw ( KiB/s): min=10472, max=14272, per=18.88%, avg=12372.00, stdev=2687.01, samples=2 00:10:02.486 iops : min= 2618, max= 3568, avg=3093.00, stdev=671.75, samples=2 00:10:02.486 lat (usec) : 750=0.02%, 1000=0.03% 00:10:02.486 lat (msec) : 10=11.39%, 20=68.68%, 50=13.71%, 100=3.72%, 250=2.45% 00:10:02.486 cpu : usr=2.60%, sys=6.09%, ctx=267, majf=0, minf=2 00:10:02.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:02.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.486 issued rwts: total=3072,3221,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.486 job3: (groupid=0, jobs=1): err= 0: pid=2233936: Mon Oct 28 04:45:52 2024 00:10:02.486 read: IOPS=4157, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1004msec) 00:10:02.486 slat (usec): min=3, max=16060, avg=105.60, stdev=598.24 00:10:02.486 clat (usec): min=887, max=44075, avg=14024.31, stdev=5150.36 00:10:02.486 lat (usec): min=4054, max=47115, avg=14129.91, stdev=5166.98 00:10:02.486 clat percentiles (usec): 00:10:02.486 | 1.00th=[ 4883], 5.00th=[ 9241], 10.00th=[10814], 20.00th=[11076], 00:10:02.486 | 30.00th=[11600], 40.00th=[12780], 50.00th=[13173], 60.00th=[13829], 00:10:02.486 | 70.00th=[14353], 80.00th=[14746], 90.00th=[17433], 95.00th=[24249], 00:10:02.486 | 99.00th=[42206], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:10:02.486 | 99.99th=[44303] 00:10:02.486 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:10:02.486 slat (usec): min=4, max=28473, avg=107.54, stdev=811.89 00:10:02.486 clat (usec): min=1029, max=51295, avg=14906.51, stdev=7687.46 00:10:02.486 lat (usec): min=1037, max=54358, avg=15014.05, stdev=7735.29 00:10:02.486 clat percentiles (usec): 00:10:02.486 | 1.00th=[ 6390], 5.00th=[ 8979], 10.00th=[10683], 20.00th=[11207], 00:10:02.486 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12780], 60.00th=[13173], 00:10:02.486 | 70.00th=[13435], 80.00th=[14222], 90.00th=[25560], 95.00th=[38011], 00:10:02.486 | 99.00th=[44303], 99.50th=[44303], 99.90th=[47449], 99.95th=[47449], 00:10:02.486 | 99.99th=[51119] 00:10:02.486 bw ( KiB/s): min=16680, max=19784, per=27.82%, avg=18232.00, stdev=2194.86, samples=2 00:10:02.486 iops : min= 4170, max= 4946, avg=4558.00, stdev=548.71, samples=2 00:10:02.486 lat (usec) : 1000=0.01% 00:10:02.486 lat (msec) : 2=0.03%, 10=7.58%, 20=82.92%, 50=9.44%, 100=0.01% 00:10:02.486 cpu : usr=6.68%, sys=10.97%, ctx=425, majf=0, minf=1 00:10:02.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:02.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.486 issued rwts: total=4174,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.486 00:10:02.486 Run status group 0 (all jobs): 00:10:02.486 READ: bw=59.6MiB/s (62.5MB/s), 12.0MiB/s-16.2MiB/s (12.6MB/s-17.0MB/s), io=60.1MiB (63.0MB), run=1002-1009msec 00:10:02.486 WRITE: bw=64.0MiB/s (67.1MB/s), 12.6MiB/s-17.9MiB/s (13.2MB/s-18.8MB/s), io=64.6MiB (67.7MB), run=1002-1009msec 00:10:02.486 00:10:02.486 Disk stats (read/write): 00:10:02.486 nvme0n1: ios=3096/3233, merge=0/0, ticks=31890/59012, in_queue=90902, util=99.30% 00:10:02.486 nvme0n2: ios=3614/3799, 
merge=0/0, ticks=50915/47376, in_queue=98291, util=98.36% 00:10:02.486 nvme0n3: ios=2048/2391, merge=0/0, ticks=18295/26918, in_queue=45213, util=87.90% 00:10:02.486 nvme0n4: ios=3604/3618, merge=0/0, ticks=18062/20257, in_queue=38319, util=97.15% 00:10:02.486 04:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:02.486 04:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2234068 00:10:02.486 04:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:02.486 04:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:02.486 [global] 00:10:02.486 thread=1 00:10:02.486 invalidate=1 00:10:02.486 rw=read 00:10:02.486 time_based=1 00:10:02.486 runtime=10 00:10:02.486 ioengine=libaio 00:10:02.486 direct=1 00:10:02.486 bs=4096 00:10:02.486 iodepth=1 00:10:02.486 norandommap=1 00:10:02.486 numjobs=1 00:10:02.486 00:10:02.486 [job0] 00:10:02.486 filename=/dev/nvme0n1 00:10:02.486 [job1] 00:10:02.486 filename=/dev/nvme0n2 00:10:02.486 [job2] 00:10:02.486 filename=/dev/nvme0n3 00:10:02.486 [job3] 00:10:02.486 filename=/dev/nvme0n4 00:10:02.486 Could not set queue depth (nvme0n1) 00:10:02.486 Could not set queue depth (nvme0n2) 00:10:02.486 Could not set queue depth (nvme0n3) 00:10:02.486 Could not set queue depth (nvme0n4) 00:10:02.744 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.744 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.744 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.744 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.744 fio-3.35 00:10:02.744 Starting 4 threads 00:10:06.024 04:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:06.024 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:06.024 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=299008, buflen=4096 00:10:06.024 fio: pid=2234283, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.024 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=39202816, buflen=4096 00:10:06.024 fio: pid=2234282, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.024 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.024 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:06.282 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=32223232, buflen=4096 00:10:06.282 fio: pid=2234280, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.282 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.282 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:06.540 04:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.540 04:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:06.540 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=376832, buflen=4096 00:10:06.540 fio: pid=2234281, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.798 00:10:06.798 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2234280: Mon Oct 28 04:45:57 2024 00:10:06.798 read: IOPS=2249, BW=8999KiB/s (9215kB/s)(30.7MiB/3497msec) 00:10:06.798 slat (usec): min=5, max=16443, avg=19.42, stdev=316.87 00:10:06.798 clat (usec): min=226, max=41395, avg=418.19, stdev=2099.80 00:10:06.798 lat (usec): min=233, max=41411, avg=437.61, stdev=2124.20 00:10:06.798 clat percentiles (usec): 00:10:06.798 | 1.00th=[ 235], 5.00th=[ 245], 10.00th=[ 260], 20.00th=[ 289], 00:10:06.798 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 318], 00:10:06.798 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 334], 95.00th=[ 343], 00:10:06.798 | 99.00th=[ 482], 99.50th=[ 676], 99.90th=[41157], 99.95th=[41157], 00:10:06.798 | 99.99th=[41157] 00:10:06.798 bw ( KiB/s): min= 104, max=12488, per=46.05%, avg=8530.67, stdev=5362.13, samples=6 00:10:06.798 iops : min= 26, max= 3122, avg=2132.67, stdev=1340.53, samples=6 00:10:06.798 lat (usec) : 250=6.69%, 500=92.40%, 750=0.46%, 1000=0.15% 00:10:06.798 lat (msec) : 2=0.03%, 50=0.27% 00:10:06.798 cpu : usr=1.89%, sys=4.20%, ctx=7874, majf=0, minf=1 00:10:06.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.798 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.798 issued rwts: total=7868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.798 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2234281: Mon Oct 28 04:45:57 2024 00:10:06.798 read: IOPS=24, BW=96.8KiB/s (99.1kB/s)(368KiB/3801msec) 00:10:06.798 slat (usec): min=13, max=16881, avg=438.58, stdev=2455.86 00:10:06.798 clat (usec): min=523, max=41930, avg=40550.01, stdev=4220.37 00:10:06.798 lat (usec): min=544, max=57992, avg=40992.98, stdev=4939.96 00:10:06.798 clat percentiles (usec): 00:10:06.798 | 1.00th=[ 523], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:06.798 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:06.798 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:06.798 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:06.798 | 99.99th=[41681] 00:10:06.798 bw ( KiB/s): min= 86, max= 104, per=0.52%, avg=96.86, stdev= 6.09, samples=7 00:10:06.798 iops : min= 21, max= 26, avg=24.14, stdev= 1.68, samples=7 00:10:06.798 lat (usec) : 750=1.08% 00:10:06.798 lat (msec) : 50=97.85% 00:10:06.798 cpu : usr=0.08%, sys=0.00%, ctx=98, majf=0, minf=2 00:10:06.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:10:06.798 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.798 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.798 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2234282: Mon Oct 28 04:45:57 2024 00:10:06.798 read: IOPS=2993, BW=11.7MiB/s (12.3MB/s)(37.4MiB/3198msec) 00:10:06.798 slat (nsec): min=5875, max=60678, avg=13302.76, stdev=5767.25 00:10:06.798 clat (usec): min=246, max=1154, avg=316.38, stdev=34.27 00:10:06.798 lat (usec): min=254, max=1164, avg=329.68, stdev=37.18 00:10:06.798 clat percentiles (usec): 00:10:06.798 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 297], 00:10:06.798 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318], 00:10:06.799 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 355], 00:10:06.799 | 99.00th=[ 461], 99.50th=[ 529], 99.90th=[ 586], 99.95th=[ 701], 00:10:06.799 | 99.99th=[ 1156] 00:10:06.799 bw ( KiB/s): min=11200, max=12672, per=64.52%, avg=11952.00, stdev=606.00, samples=6 00:10:06.799 iops : min= 2800, max= 3168, avg=2988.00, stdev=151.50, samples=6 00:10:06.799 lat (usec) : 250=0.05%, 500=99.34%, 750=0.56%, 1000=0.02% 00:10:06.799 lat (msec) : 2=0.01% 00:10:06.799 cpu : usr=2.69%, sys=5.97%, ctx=9574, majf=0, minf=2 00:10:06.799 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.799 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.799 issued rwts: total=9572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.799 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.799 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2234283: Mon Oct 28 04:45:57 2024 00:10:06.799 read: IOPS=25, BW=99.5KiB/s (102kB/s)(292KiB/2935msec) 00:10:06.799 slat (nsec): min=13611, max=48328, avg=25305.97, stdev=9036.17 00:10:06.799 clat (usec): min=382, max=41176, avg=39855.97, stdev=6657.75 00:10:06.799 lat (usec): min=408, max=41205, avg=39881.36, stdev=6657.61 00:10:06.799 clat percentiles (usec): 00:10:06.799 | 1.00th=[ 383], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:06.799 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:06.799 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:06.799 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:06.799 | 99.99th=[41157] 00:10:06.799 bw ( KiB/s): min= 96, max= 112, per=0.54%, avg=100.80, stdev= 7.16, samples=5 00:10:06.799 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:10:06.799 lat (usec) : 500=1.35%, 750=1.35% 00:10:06.799 lat (msec) : 50=95.95% 00:10:06.799 cpu : usr=0.00%, sys=0.10%, ctx=74, majf=0, minf=1 00:10:06.799 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.799 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.799 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.799 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.799 00:10:06.799 Run status group 0 (all jobs): 00:10:06.799 READ: bw=18.1MiB/s (19.0MB/s), 96.8KiB/s-11.7MiB/s (99.1kB/s-12.3MB/s), io=68.8MiB (72.1MB), run=2935-3801msec 00:10:06.799 
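The read pass above is the hotplug case: the script launches a 10-second fio read job against all four namespaces, sleeps 3 seconds, then deletes the backing bdevs out from under the subsystem, so every job aborts with err=95 (Operation not supported), which the test expects. Reduced to a sketch, with fio-wrapper and $RPC again standing in for the full script paths shown in the trace (shorthand of this sketch, not variables of the test itself); the backgrounding and pid capture are inferred from the fio_pid/sleep sequence in the trace.

    fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # backgrounded 10-second read job; its pid shows up as fio_pid
    fio_pid=$!
    sleep 3                                            # let the read I/O get going first
    $RPC bdev_raid_delete concat0
    $RPC bdev_raid_delete raid0
    for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $RPC bdev_malloc_delete $b                     # namespaces disappear while fio is still reading
    done
    wait $fio_pid                                      # non-zero exit -> 'nvmf hotplug test: fio failed as expected'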
00:10:06.799 Disk stats (read/write): 00:10:06.799 nvme0n1: ios=7474/0, merge=0/0, ticks=4153/0, in_queue=4153, util=98.11% 00:10:06.799 nvme0n2: ios=129/0, merge=0/0, ticks=4548/0, in_queue=4548, util=98.69% 00:10:06.799 nvme0n3: ios=9346/0, merge=0/0, ticks=3869/0, in_queue=3869, util=99.47% 00:10:06.799 nvme0n4: ios=71/0, merge=0/0, ticks=2829/0, in_queue=2829, util=96.75% 00:10:07.057 04:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.057 04:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:07.315 04:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.315 04:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:07.573 04:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.573 04:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:07.831 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.831 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2234068 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:08.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:08.090 nvmf hotplug test: fio failed as expected 00:10:08.090 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:08.658 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:08.658 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:08.658 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:08.658 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:08.658 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:08.658 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:08.658 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:08.658 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.658 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:08.658 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.658 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.658 rmmod nvme_tcp 00:10:08.658 rmmod nvme_fabrics 00:10:08.658 rmmod nvme_keyring 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2232057 ']' 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2232057 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2232057 ']' 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2232057 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2232057 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2232057' 00:10:08.658 killing process with pid 2232057 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2232057 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2232057 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # 
nvmf_tcp_fini 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.658 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.197 00:10:11.197 real 0m25.050s 00:10:11.197 user 1m28.894s 00:10:11.197 sys 0m6.692s 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.197 ************************************ 00:10:11.197 END TEST nvmf_fio_target 00:10:11.197 ************************************ 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.197 ************************************ 00:10:11.197 START TEST nvmf_bdevio 00:10:11.197 ************************************ 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:11.197 * Looking for test storage... 
00:10:11.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lcov --version 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.197 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:11.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.197 --rc genhtml_branch_coverage=1 00:10:11.197 --rc genhtml_function_coverage=1 00:10:11.197 --rc genhtml_legend=1 00:10:11.197 --rc geninfo_all_blocks=1 00:10:11.198 --rc geninfo_unexecuted_blocks=1 00:10:11.198 00:10:11.198 ' 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:11.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.198 --rc genhtml_branch_coverage=1 00:10:11.198 --rc genhtml_function_coverage=1 00:10:11.198 --rc genhtml_legend=1 00:10:11.198 --rc geninfo_all_blocks=1 00:10:11.198 --rc geninfo_unexecuted_blocks=1 00:10:11.198 00:10:11.198 ' 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:11.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.198 --rc genhtml_branch_coverage=1 00:10:11.198 --rc genhtml_function_coverage=1 00:10:11.198 --rc genhtml_legend=1 00:10:11.198 --rc geninfo_all_blocks=1 00:10:11.198 --rc geninfo_unexecuted_blocks=1 00:10:11.198 00:10:11.198 ' 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:11.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.198 --rc genhtml_branch_coverage=1 00:10:11.198 --rc genhtml_function_coverage=1 00:10:11.198 --rc genhtml_legend=1 00:10:11.198 --rc geninfo_all_blocks=1 00:10:11.198 --rc geninfo_unexecuted_blocks=1 00:10:11.198 00:10:11.198 ' 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.198 04:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:13.102 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:13.102 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:13.102 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:13.102 04:46:03 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:13.103 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:13.103 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.103 
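The PCI scan above has found the two E810 ports (0000:0a:00.0 and 0000:0a:00.1, exposed as cvl_0_0 and cvl_0_1) and picked cvl_0_0 as the target-side interface and cvl_0_1 as the initiator side. The entries that follow wire them together: the target port is moved into a private network namespace, the pair gets back-to-back addresses on 10.0.0.0/24, an iptables rule opens port 4420, and two pings act as a sanity check. Condensed into plain commands, all of which appear in the surrounding trace, the wiring looks roughly like this:

    # Hedged sketch of the netns wiring performed by nvmf_tcp_init in the entries
    # that follow; interface names and addresses are the ones this log uses.
    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic to port 4420 on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # target address reachable?
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1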
04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.103 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:10:13.362 00:10:13.362 --- 10.0.0.2 ping statistics --- 00:10:13.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.362 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:10:13.362 00:10:13.362 --- 10.0.0.1 ping statistics --- 00:10:13.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.362 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:13.362 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:13.363 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:13.363 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.363 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:13.363 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2236887 00:10:13.363 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:13.363 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2236887 00:10:13.363 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2236887 ']' 00:10:13.363 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.363 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.363 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.363 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.363 04:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:13.363 [2024-10-28 04:46:03.812374] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:10:13.363 [2024-10-28 04:46:03.812454] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.363 [2024-10-28 04:46:03.956277] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:10:13.621 [2024-10-28 04:46:03.997509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.621 [2024-10-28 04:46:04.049727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.621 [2024-10-28 04:46:04.049804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.621 [2024-10-28 04:46:04.049821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.621 [2024-10-28 04:46:04.049834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.621 [2024-10-28 04:46:04.049846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.621 [2024-10-28 04:46:04.051604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:13.621 [2024-10-28 04:46:04.051666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:13.621 [2024-10-28 04:46:04.051719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:13.621 [2024-10-28 04:46:04.051723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 [2024-10-28 04:46:04.867072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 Malloc0 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.555 
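With nvmf_tgt up and listening on its RPC socket, the bdevio stage provisions the target entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1, followed in the next entries by the namespace and a listener on 10.0.0.2:4420. Issued by hand with scripts/rpc.py, the same sequence would look roughly like the sketch below; every argument is copied from the trace, and the $RPC variable is only shorthand introduced here.

    # Hedged sketch of the target provisioning performed around this point,
    # using the same arguments the harness passes to rpc_cmd.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420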
04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.555 [2024-10-28 04:46:04.931433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:14.555 { 00:10:14.555 "params": { 00:10:14.555 "name": "Nvme$subsystem", 00:10:14.555 "trtype": "$TEST_TRANSPORT", 00:10:14.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.555 "adrfam": "ipv4", 00:10:14.555 "trsvcid": "$NVMF_PORT", 00:10:14.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.555 "hdgst": ${hdgst:-false}, 00:10:14.555 "ddgst": ${ddgst:-false} 00:10:14.555 }, 00:10:14.555 "method": "bdev_nvme_attach_controller" 00:10:14.555 } 00:10:14.555 EOF 00:10:14.555 )") 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:14.555 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:14.555 "params": { 00:10:14.555 "name": "Nvme1", 00:10:14.555 "trtype": "tcp", 00:10:14.555 "traddr": "10.0.0.2", 00:10:14.555 "adrfam": "ipv4", 00:10:14.555 "trsvcid": "4420", 00:10:14.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.555 "hdgst": false, 00:10:14.555 "ddgst": false 00:10:14.555 }, 00:10:14.555 "method": "bdev_nvme_attach_controller" 00:10:14.555 }' 00:10:14.555 [2024-10-28 04:46:04.982504] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:10:14.555 [2024-10-28 04:46:04.982571] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237042 ] 00:10:14.555 [2024-10-28 04:46:05.114452] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:14.814 [2024-10-28 04:46:05.153895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:14.814 [2024-10-28 04:46:05.206676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.814 [2024-10-28 04:46:05.206705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.814 [2024-10-28 04:46:05.206708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.072 I/O targets: 00:10:15.072 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:15.072 00:10:15.072 00:10:15.072 CUnit - A unit testing framework for C - Version 2.1-3 00:10:15.072 http://cunit.sourceforge.net/ 00:10:15.072 00:10:15.072 00:10:15.072 Suite: bdevio tests on: Nvme1n1 00:10:15.072 Test: blockdev write read block ...passed 00:10:15.072 Test: blockdev write zeroes read block ...passed 00:10:15.072 Test: blockdev write zeroes read no split ...passed 00:10:15.330 Test: blockdev write zeroes read split ...passed 00:10:15.330 Test: blockdev write zeroes read split partial ...passed 00:10:15.330 Test: blockdev reset ...[2024-10-28 04:46:05.711173] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:15.330 [2024-10-28 04:46:05.711293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e1600 (9): Bad file descriptor 00:10:15.330 [2024-10-28 04:46:05.772064] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:15.330 passed 00:10:15.330 Test: blockdev write read 8 blocks ...passed 00:10:15.330 Test: blockdev write read size > 128k ...passed 00:10:15.330 Test: blockdev write read invalid size ...passed 00:10:15.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.330 Test: blockdev write read max offset ...passed 00:10:15.588 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:15.588 Test: blockdev writev readv 8 blocks ...passed 00:10:15.588 Test: blockdev writev readv 30 x 1block ...passed 00:10:15.588 Test: blockdev writev readv block ...passed 00:10:15.588 Test: blockdev writev readv size > 128k ...passed 00:10:15.588 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:15.588 Test: blockdev comparev and writev ...[2024-10-28 04:46:05.985597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.588 [2024-10-28 04:46:05.985648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:15.588 [2024-10-28 04:46:05.985677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.588 [2024-10-28 04:46:05.985695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:15.588 [2024-10-28 04:46:05.986073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.588 [2024-10-28 04:46:05.986098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:15.588 [2024-10-28 04:46:05.986122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.588 [2024-10-28 04:46:05.986138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:15.588 [2024-10-28 04:46:05.986488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.588 [2024-10-28 04:46:05.986512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:15.588 [2024-10-28 04:46:05.986535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.588 [2024-10-28 04:46:05.986551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:15.588 [2024-10-28 04:46:05.986918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.588 [2024-10-28 04:46:05.986942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:15.588 [2024-10-28 04:46:05.986964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.588 [2024-10-28 04:46:05.986993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:15.588 passed 00:10:15.588 Test: blockdev nvme passthru rw ...passed 00:10:15.588 Test: blockdev nvme passthru vendor specific ...[2024-10-28 04:46:06.068960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:15.588 [2024-10-28 04:46:06.068987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:15.588 [2024-10-28 04:46:06.069162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:15.588 [2024-10-28 04:46:06.069185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:15.588 [2024-10-28 04:46:06.069359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:15.588 [2024-10-28 04:46:06.069382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:15.588 [2024-10-28 04:46:06.069554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:15.588 [2024-10-28 04:46:06.069577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:15.588 passed 00:10:15.588 Test: blockdev nvme admin passthru ...passed 00:10:15.588 Test: blockdev copy ...passed 00:10:15.588 00:10:15.588 Run Summary: Type Total Ran Passed Failed Inactive 00:10:15.588 suites 1 1 n/a 0 0 00:10:15.588 tests 23 23 23 0 0 00:10:15.588 asserts 152 152 152 0 n/a 00:10:15.588 00:10:15.588 Elapsed time = 1.137 seconds 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.847 rmmod nvme_tcp 00:10:15.847 rmmod nvme_fabrics 00:10:15.847 rmmod nvme_keyring 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
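All 23 bdevio tests passed (152 asserts) in about 1.1 seconds, so the stage now unwinds what it set up: the subsystem is deleted over RPC, the host nvme-tcp modules are unloaded, the nvmf_tgt process (pid 2236887 in this run) is killed, the SPDK iptables rule is stripped and the test namespace goes away. The equivalent manual cleanup is sketched below; the NQN, module names and interface come from the log, while the exact namespace-removal command is an assumption because the harness silences that part of the trace.

    # Hedged sketch of the teardown performed in the surrounding entries.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp                    # also unloads nvme_fabrics and nvme_keyring here
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                            # the target pid, 2236887 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the port-4420 ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk            # assumption: how remove_spdk_ns drops the namespace
    ip -4 addr flush cvl_0_1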
00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2236887 ']' 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2236887 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2236887 ']' 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2236887 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2236887 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2236887' 00:10:15.847 killing process with pid 2236887 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2236887 00:10:15.847 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2236887 00:10:16.105 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:16.105 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:16.105 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:16.105 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:16.105 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:16.105 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:16.105 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:16.105 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.105 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.105 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.105 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.105 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.639 00:10:18.639 real 0m7.357s 00:10:18.639 user 0m13.931s 00:10:18.639 sys 0m2.223s 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.639 ************************************ 00:10:18.639 END TEST nvmf_bdevio 00:10:18.639 ************************************ 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:18.639 00:10:18.639 real 4m6.253s 00:10:18.639 user 10m41.627s 00:10:18.639 sys 1m8.031s 
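That closes out nvmf_bdevio (about 7.4 s of wall-clock time for the stage) and with it the whole nvmf_target_core group at just over four minutes; the harness immediately chains into nvmf_target_extra with the same --transport=tcp argument. Each stage is an ordinary shell script under test/nvmf/, so a failing stage can usually be re-run on its own in the same environment, for example:

    # Hedged sketch: invoking a single stage directly instead of through run_test.
    # Script paths and the flag are taken from the run_test lines in this log;
    # root privileges are needed for the module loading and netns handling.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/nvmf/target/bdevio.sh --transport=tcp
    # or, for the group that starts next:
    # sudo ./test/nvmf/nvmf_target_extra.sh --transport=tcp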
00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.639 ************************************ 00:10:18.639 END TEST nvmf_target_core 00:10:18.639 ************************************ 00:10:18.639 04:46:08 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:18.639 04:46:08 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:18.639 04:46:08 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.639 04:46:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:18.639 ************************************ 00:10:18.639 START TEST nvmf_target_extra 00:10:18.639 ************************************ 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:18.639 * Looking for test storage... 00:10:18.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1689 -- # lcov --version 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:18.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.639 --rc genhtml_branch_coverage=1 00:10:18.639 --rc genhtml_function_coverage=1 00:10:18.639 --rc genhtml_legend=1 00:10:18.639 --rc geninfo_all_blocks=1 00:10:18.639 --rc geninfo_unexecuted_blocks=1 00:10:18.639 00:10:18.639 ' 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:18.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.639 --rc genhtml_branch_coverage=1 00:10:18.639 --rc genhtml_function_coverage=1 00:10:18.639 --rc genhtml_legend=1 00:10:18.639 --rc geninfo_all_blocks=1 00:10:18.639 --rc geninfo_unexecuted_blocks=1 00:10:18.639 00:10:18.639 ' 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:18.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.639 --rc genhtml_branch_coverage=1 00:10:18.639 --rc genhtml_function_coverage=1 00:10:18.639 --rc genhtml_legend=1 00:10:18.639 --rc geninfo_all_blocks=1 00:10:18.639 --rc geninfo_unexecuted_blocks=1 00:10:18.639 00:10:18.639 ' 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:18.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.639 --rc genhtml_branch_coverage=1 00:10:18.639 --rc genhtml_function_coverage=1 00:10:18.639 --rc genhtml_legend=1 00:10:18.639 --rc geninfo_all_blocks=1 00:10:18.639 --rc geninfo_unexecuted_blocks=1 00:10:18.639 00:10:18.639 ' 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.639 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:18.640 ************************************ 00:10:18.640 START TEST nvmf_example 00:10:18.640 ************************************ 00:10:18.640 04:46:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:18.640 * Looking for test storage... 
00:10:18.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1689 -- # lcov --version 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:18.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.640 --rc genhtml_branch_coverage=1 00:10:18.640 --rc genhtml_function_coverage=1 00:10:18.640 --rc genhtml_legend=1 00:10:18.640 --rc geninfo_all_blocks=1 00:10:18.640 --rc geninfo_unexecuted_blocks=1 00:10:18.640 00:10:18.640 ' 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:18.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.640 --rc genhtml_branch_coverage=1 00:10:18.640 --rc genhtml_function_coverage=1 00:10:18.640 --rc genhtml_legend=1 00:10:18.640 --rc geninfo_all_blocks=1 00:10:18.640 --rc geninfo_unexecuted_blocks=1 00:10:18.640 00:10:18.640 ' 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:18.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.640 --rc genhtml_branch_coverage=1 00:10:18.640 --rc genhtml_function_coverage=1 00:10:18.640 --rc genhtml_legend=1 00:10:18.640 --rc geninfo_all_blocks=1 00:10:18.640 --rc geninfo_unexecuted_blocks=1 00:10:18.640 00:10:18.640 ' 00:10:18.640 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:18.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.640 --rc genhtml_branch_coverage=1 00:10:18.640 --rc genhtml_function_coverage=1 00:10:18.640 --rc genhtml_legend=1 00:10:18.640 --rc geninfo_all_blocks=1 00:10:18.640 --rc geninfo_unexecuted_blocks=1 00:10:18.640 00:10:18.640 ' 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:18.641 04:46:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:18.641 04:46:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:18.641 04:46:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:21.175 04:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:21.175 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.175 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:21.176 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:21.176 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:21.176 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.176 04:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:21.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:21.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:10:21.176 00:10:21.176 --- 10.0.0.2 ping statistics --- 00:10:21.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.176 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:10:21.176 00:10:21.176 --- 10.0.0.1 ping statistics --- 00:10:21.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.176 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2239289 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2239289 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2239289 ']' 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.176 04:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.176 04:46:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:22.133 04:46:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:34.353 Initializing NVMe Controllers 00:10:34.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:34.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:34.353 Initialization complete. Launching workers. 00:10:34.353 ======================================================== 00:10:34.353 Latency(us) 00:10:34.353 Device Information : IOPS MiB/s Average min max 00:10:34.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14430.53 56.37 4435.71 889.62 15749.23 00:10:34.353 ======================================================== 00:10:34.353 Total : 14430.53 56.37 4435.71 889.62 15749.23 00:10:34.353 00:10:34.353 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:34.353 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:34.353 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:34.353 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:34.353 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.353 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:34.353 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.353 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.353 rmmod nvme_tcp 00:10:34.353 rmmod nvme_fabrics 00:10:34.353 rmmod nvme_keyring 00:10:34.353 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.353 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:34.353 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 2239289 ']' 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 2239289 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2239289 ']' 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2239289 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2239289 00:10:34.354 04:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2239289' 00:10:34.354 killing process with pid 2239289 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2239289 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2239289 00:10:34.354 nvmf threads initialize successfully 00:10:34.354 bdev subsystem init successfully 00:10:34.354 created a nvmf target service 00:10:34.354 create targets's poll groups done 00:10:34.354 all subsystems of target started 00:10:34.354 nvmf target is running 00:10:34.354 all subsystems of target stopped 00:10:34.354 destroy targets's poll groups done 00:10:34.354 destroyed the nvmf target service 00:10:34.354 bdev subsystem finish successfully 00:10:34.354 nvmf threads destroy successfully 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.354 04:46:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.922 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.922 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:34.922 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:34.922 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.922 00:10:34.922 real 0m16.521s 00:10:34.922 user 0m45.531s 00:10:34.922 sys 0m3.800s 00:10:34.922 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.922 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.922 ************************************ 00:10:34.922 END TEST nvmf_example 00:10:34.922 ************************************ 00:10:34.922 04:46:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:34.922 04:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:34.922 04:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.922 04:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:35.184 ************************************ 00:10:35.184 START TEST nvmf_filesystem 00:10:35.184 ************************************ 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:35.184 * Looking for test storage... 00:10:35.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lcov --version 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.184 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:35.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.185 --rc genhtml_branch_coverage=1 00:10:35.185 --rc genhtml_function_coverage=1 00:10:35.185 --rc genhtml_legend=1 00:10:35.185 --rc geninfo_all_blocks=1 00:10:35.185 --rc geninfo_unexecuted_blocks=1 00:10:35.185 00:10:35.185 ' 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:35.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.185 --rc genhtml_branch_coverage=1 00:10:35.185 --rc genhtml_function_coverage=1 00:10:35.185 --rc genhtml_legend=1 00:10:35.185 --rc geninfo_all_blocks=1 00:10:35.185 --rc geninfo_unexecuted_blocks=1 00:10:35.185 00:10:35.185 ' 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:35.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.185 --rc genhtml_branch_coverage=1 00:10:35.185 --rc genhtml_function_coverage=1 00:10:35.185 --rc genhtml_legend=1 00:10:35.185 --rc geninfo_all_blocks=1 00:10:35.185 --rc geninfo_unexecuted_blocks=1 00:10:35.185 00:10:35.185 ' 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:35.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.185 --rc genhtml_branch_coverage=1 00:10:35.185 --rc genhtml_function_coverage=1 00:10:35.185 --rc genhtml_legend=1 00:10:35.185 --rc geninfo_all_blocks=1 00:10:35.185 --rc geninfo_unexecuted_blocks=1 00:10:35.185 00:10:35.185 ' 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:35.185 04:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:35.185 
04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:35.185 04:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:35.185 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
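The applications.sh trace just above resolves the SPDK repository root from the sourced file's own location and then defines launcher arrays (NVMF_APP, VHOST_APP, SPDK_APP, ...) pointing at the built binaries; the entry that follows checks that include/spdk/config.h was generated with SPDK_CONFIG_DEBUG. A minimal standalone sketch of that pattern, with the suffix-stripping step and the config.h path as assumptions inferred from the paths visible in the trace:

  _this_dir=$(readlink -f "$(dirname "${BASH_SOURCE[0]:-$0}")")   # e.g. .../spdk/test/common
  _root=${_this_dir%/test/common}                                 # assumed: strip back to the repo root
  _app_dir=$_root/build/bin
  NVMF_APP=("$_app_dir/nvmf_tgt")                                 # later launched as "${NVMF_APP[@]}"
  if [[ -e $_root/include/spdk/config.h ]] &&
     [[ $(<"$_root/include/spdk/config.h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build detected"
  fi
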
00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:35.186 #define SPDK_CONFIG_H 00:10:35.186 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:35.186 #define SPDK_CONFIG_APPS 1 00:10:35.186 #define SPDK_CONFIG_ARCH native 00:10:35.186 #undef SPDK_CONFIG_ASAN 00:10:35.186 #undef SPDK_CONFIG_AVAHI 00:10:35.186 #undef SPDK_CONFIG_CET 00:10:35.186 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:35.186 #define SPDK_CONFIG_COVERAGE 1 00:10:35.186 #define SPDK_CONFIG_CROSS_PREFIX 00:10:35.186 #undef SPDK_CONFIG_CRYPTO 00:10:35.186 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:35.186 #undef SPDK_CONFIG_CUSTOMOCF 00:10:35.186 #undef SPDK_CONFIG_DAOS 00:10:35.186 #define SPDK_CONFIG_DAOS_DIR 00:10:35.186 #define SPDK_CONFIG_DEBUG 1 00:10:35.186 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:35.186 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:35.186 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:35.186 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:35.186 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:35.186 #undef SPDK_CONFIG_DPDK_UADK 00:10:35.186 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:35.186 #define SPDK_CONFIG_EXAMPLES 1 00:10:35.186 #undef SPDK_CONFIG_FC 00:10:35.186 #define SPDK_CONFIG_FC_PATH 00:10:35.186 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:35.186 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:35.186 #define SPDK_CONFIG_FSDEV 1 00:10:35.186 #undef SPDK_CONFIG_FUSE 00:10:35.186 #undef SPDK_CONFIG_FUZZER 00:10:35.186 #define SPDK_CONFIG_FUZZER_LIB 00:10:35.186 #undef SPDK_CONFIG_GOLANG 00:10:35.186 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:35.186 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:35.186 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:35.186 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:35.186 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:35.186 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:35.186 #undef SPDK_CONFIG_HAVE_LZ4 00:10:35.186 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:35.186 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:35.186 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:35.186 #define SPDK_CONFIG_IDXD 1 00:10:35.186 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:35.186 #undef SPDK_CONFIG_IPSEC_MB 00:10:35.186 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:35.186 #define SPDK_CONFIG_ISAL 1 00:10:35.186 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:35.186 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:35.186 #define SPDK_CONFIG_LIBDIR 00:10:35.186 #undef SPDK_CONFIG_LTO 00:10:35.186 #define SPDK_CONFIG_MAX_LCORES 128 00:10:35.186 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:35.186 #define SPDK_CONFIG_NVME_CUSE 1 00:10:35.186 #undef SPDK_CONFIG_OCF 00:10:35.186 #define SPDK_CONFIG_OCF_PATH 00:10:35.186 #define SPDK_CONFIG_OPENSSL_PATH 00:10:35.186 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:35.186 #define SPDK_CONFIG_PGO_DIR 00:10:35.186 #undef SPDK_CONFIG_PGO_USE 00:10:35.186 #define SPDK_CONFIG_PREFIX /usr/local 00:10:35.186 #undef SPDK_CONFIG_RAID5F 00:10:35.186 #undef SPDK_CONFIG_RBD 00:10:35.186 #define SPDK_CONFIG_RDMA 1 00:10:35.186 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:35.186 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:35.186 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:35.186 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:35.186 #define SPDK_CONFIG_SHARED 1 00:10:35.186 #undef SPDK_CONFIG_SMA 00:10:35.186 #define SPDK_CONFIG_TESTS 1 00:10:35.186 #undef SPDK_CONFIG_TSAN 00:10:35.186 #define SPDK_CONFIG_UBLK 1 00:10:35.186 #define SPDK_CONFIG_UBSAN 1 00:10:35.186 #undef SPDK_CONFIG_UNIT_TESTS 00:10:35.186 #undef SPDK_CONFIG_URING 00:10:35.186 #define SPDK_CONFIG_URING_PATH 00:10:35.186 #undef SPDK_CONFIG_URING_ZNS 00:10:35.186 #undef SPDK_CONFIG_USDT 00:10:35.186 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:35.186 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:35.186 #define SPDK_CONFIG_VFIO_USER 1 00:10:35.186 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:35.186 #define SPDK_CONFIG_VHOST 1 00:10:35.186 #define SPDK_CONFIG_VIRTIO 1 00:10:35.186 #undef SPDK_CONFIG_VTUNE 00:10:35.186 #define SPDK_CONFIG_VTUNE_DIR 00:10:35.186 #define SPDK_CONFIG_WERROR 1 00:10:35.186 #define SPDK_CONFIG_WPDK_DIR 00:10:35.186 #undef SPDK_CONFIG_XNVME 00:10:35.186 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:35.186 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:35.187 04:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
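The scripts/perf/pm/common trace above chooses which resource monitors accompany the test: CPU-load and vmstat collection always run, while CPU-temperature and BMC power collection are added only on bare-metal Linux (the host is not QEMU and not a container). A hedged sketch of that selection, with the dmidecode probe standing in as an assumption for whatever check produces the dotted product-name string shown in the trace; the run of SPDK_TEST_* defaults that starts just above continues in the entries below.

  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
  if [[ $(uname -s) == Linux ]] &&
     [[ $(dmidecode -s system-product-name 2>/dev/null) != QEMU ]] &&   # assumed probe
     [[ ! -e /.dockerenv ]]; then
      MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
  fi
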
00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:35.187 04:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
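The long run of ': value' / 'export SPDK_TEST_*' entries above, which continues through the next several entries, is autotest_common.sh giving each test switch a default that the CI job environment may already have overridden, then exporting it for the child scripts. A minimal sketch of that idiom; the ':=' parameter-expansion form is an assumption (the xtrace only shows the already-expanded values), and the values used here are the ones visible in this job's trace:

  : "${SPDK_TEST_NVMF:=1}";              export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}";  export SPDK_TEST_NVMF_TRANSPORT
  : "${SPDK_TEST_NVMF_NICS:=e810}";      export SPDK_TEST_NVMF_NICS
  # downstream scripts then branch on these, e.g.:
  if [[ $SPDK_TEST_NVMF -eq 1 && $SPDK_TEST_NVMF_TRANSPORT == tcp ]]; then
      echo "running the nvmf-over-TCP suites"
  fi
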
00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : main 00:10:35.187 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:35.188 04:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:35.188 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:35.449 04:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:35.449 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2240957 ]] 00:10:35.450 04:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2240957 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1674 -- # set_test_storage 2147483648 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.znEP5M 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.znEP5M/tests/target /tmp/spdk.znEP5M 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=53662437376 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988532224 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=8326094848 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982897664 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375269376 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22437888 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30993211392 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994268160 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1056768 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:35.450 * Looking for test storage... 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=53662437376 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=10540687360 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.450 04:46:25 
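The set_test_storage trace above boils down to a simple policy: parse `df -T` into per-mount size/avail/use tables, then walk the candidate directories and keep the first one whose backing filesystem has at least the requested ~2 GiB free. A minimal stand-alone sketch of that policy (simplified; the real helper in autotest_common.sh also special-cases tmpfs/ramfs mounts and exports SPDK_TEST_STORAGE):

# Hedged sketch: pick the first candidate directory backed by a filesystem
# with at least $requested_size bytes available (candidate names are illustrative).
requested_size=$((2 * 1024 * 1024 * 1024))   # roughly 2 GiB, as in the trace
candidates=("$PWD/tests" "/tmp/spdk_storage")

pick_test_storage() {
    local dir avail
    for dir in "${candidates[@]}"; do
        mkdir -p "$dir" || continue
        # df -B1 --output=avail prints the free bytes of the backing filesystem
        avail=$(df -B1 --output=avail "$dir" | tail -n1)
        if (( avail >= requested_size )); then
            printf '* Found test storage at %s\n' "$dir"
            echo "$dir"
            return 0
        fi
    done
    return 1
}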
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set -o errtrace 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1677 -- # shopt -s extdebug 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # true 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # xtrace_fd 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:35.450 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lcov --version 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.451 04:46:25 
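Everything in this log is produced by the xtrace instrumentation configured right here: errtrace plus extdebug so an ERR trap can walk the call stack, a PS4 that stamps every traced command with the wall-clock time, the test domain and the source file/line, and finally `set -x`. Roughly the same effect can be reproduced in an ordinary script as below; print_backtrace is SPDK's helper, so a trivial stand-in is shown.

# Hedged sketch of the tracing setup visible in the trace above.
set -o errtrace          # let the ERR trap fire inside functions too
shopt -s extdebug        # expose the call-stack arrays needed for backtraces

print_backtrace() {      # minimal stand-in for SPDK's print_backtrace helper
    local i
    for ((i = 1; i < ${#FUNCNAME[@]}; i++)); do
        echo "  at ${FUNCNAME[$i]} (${BASH_SOURCE[$i]}:${BASH_LINENO[$((i - 1))]})" >&2
    done
}
trap 'trap - ERR; print_backtrace >&2' ERR

# every traced line becomes "<time> <domain> -- <file>@<line> -- $ command"
PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
set -x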
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:35.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.451 --rc genhtml_branch_coverage=1 00:10:35.451 --rc genhtml_function_coverage=1 00:10:35.451 --rc genhtml_legend=1 00:10:35.451 --rc geninfo_all_blocks=1 00:10:35.451 --rc geninfo_unexecuted_blocks=1 00:10:35.451 00:10:35.451 ' 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:35.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.451 --rc genhtml_branch_coverage=1 00:10:35.451 --rc genhtml_function_coverage=1 00:10:35.451 --rc genhtml_legend=1 00:10:35.451 --rc geninfo_all_blocks=1 00:10:35.451 --rc geninfo_unexecuted_blocks=1 00:10:35.451 00:10:35.451 ' 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:35.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.451 --rc genhtml_branch_coverage=1 00:10:35.451 --rc genhtml_function_coverage=1 00:10:35.451 --rc genhtml_legend=1 00:10:35.451 --rc geninfo_all_blocks=1 00:10:35.451 --rc geninfo_unexecuted_blocks=1 00:10:35.451 00:10:35.451 ' 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:35.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.451 --rc genhtml_branch_coverage=1 00:10:35.451 --rc 
genhtml_function_coverage=1 00:10:35.451 --rc genhtml_legend=1 00:10:35.451 --rc geninfo_all_blocks=1 00:10:35.451 --rc geninfo_unexecuted_blocks=1 00:10:35.451 00:10:35.451 ' 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
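The `lt 1.15 2` call traced above is scripts/common.sh's field-wise dotted-version comparison: both strings are split on `.-:`, missing fields default to zero, and the first differing field decides. It is used here to pick which lcov option spelling goes into LCOV_OPTS. A compact equivalent, assuming plain numeric fields:

# Hedged sketch of the field-wise version comparison traced above.
version_lt() {           # returns 0 if $1 < $2
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1             # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 sorts before 2: keep the legacy --rc lcov_* option names"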
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:35.451 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:35.452 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.452 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:35.452 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:35.452 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:35.452 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.452 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.452 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.452 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:35.452 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:35.452 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:35.452 04:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:37.982 
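The `[: : integer expression expected` message above is noisy but not fatal: nvmf/common.sh line 33 ends up running `'[' '' -eq 1 ']'` because the variable behind it is empty, and `-eq` requires integer operands, so `[` prints the complaint and returns non-zero, which the script simply treats as "feature off". A quieter pattern is to default the value before the numeric test:

# Hedged illustration of why '[ "" -eq 1 ]' warns, and a quieter alternative.
flag=""                               # unset/empty, as in the trace

[ "$flag" -eq 1 ] && echo "on"        # prints "[: : integer expression expected"

if [ "${flag:-0}" -eq 1 ]; then       # default empty to 0: same outcome, no warning
    echo "on"
fi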
04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:37.982 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.982 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:37.983 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:37.983 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:37.983 Found net devices under 
0000:0a:00.1: cvl_0_1 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:37.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:10:37.983 00:10:37.983 --- 10.0.0.2 ping statistics --- 00:10:37.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.983 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:10:37.983 00:10:37.983 --- 10.0.0.1 ping statistics --- 00:10:37.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.983 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.983 ************************************ 00:10:37.983 START TEST nvmf_filesystem_no_in_capsule 00:10:37.983 ************************************ 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
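nvmf_tcp_init above builds a tiny two-host topology out of the two ice ports found earlier: cvl_0_0 is moved into a private network namespace and plays the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420, and a ping in each direction proves reachability. Condensed from the commands in the trace (interface names as discovered above):

# Condensed from the nvmf/common.sh commands traced above.
TGT_IF=cvl_0_0  INI_IF=cvl_0_1  NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                 # target side lives in the namespace

ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1            # target namespace -> initiator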
00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2242678 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2242678 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2242678 ']' 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.983 04:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.983 [2024-10-28 04:46:28.322881] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:10:37.983 [2024-10-28 04:46:28.322977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.983 [2024-10-28 04:46:28.461851] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:37.983 [2024-10-28 04:46:28.497151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.983 [2024-10-28 04:46:28.547487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.983 [2024-10-28 04:46:28.547550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.983 [2024-10-28 04:46:28.547566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.983 [2024-10-28 04:46:28.547580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.983 [2024-10-28 04:46:28.547592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:37.983 [2024-10-28 04:46:28.549410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.984 [2024-10-28 04:46:28.549479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.984 [2024-10-28 04:46:28.549570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.984 [2024-10-28 04:46:28.549573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.914 [2024-10-28 04:46:29.371382] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.914 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.176 Malloc1 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.176 04:46:29 
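nvmfappstart above launches nvmf_tgt inside the target namespace and then waitforlisten blocks until the application answers on its RPC socket, so the rpc_cmd calls that follow never race the startup. In outline (paths as in this workspace; the polling loop is a simplified stand-in for waitforlisten):

# Hedged outline of the target bring-up traced above.
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# simplified stand-in for waitforlisten: poll until the RPC server responds
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done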
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.176 [2024-10-28 04:46:29.563241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.176 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:39.176 { 00:10:39.176 "name": "Malloc1", 00:10:39.176 "aliases": [ 00:10:39.176 "410d5409-5419-496c-9357-ca88e53ea090" 00:10:39.176 ], 00:10:39.176 "product_name": "Malloc disk", 00:10:39.176 "block_size": 512, 00:10:39.176 "num_blocks": 1048576, 00:10:39.176 "uuid": "410d5409-5419-496c-9357-ca88e53ea090", 00:10:39.176 "assigned_rate_limits": { 00:10:39.176 "rw_ios_per_sec": 0, 00:10:39.176 "rw_mbytes_per_sec": 0, 00:10:39.176 "r_mbytes_per_sec": 0, 00:10:39.176 "w_mbytes_per_sec": 0 00:10:39.176 }, 00:10:39.176 "claimed": true, 00:10:39.176 "claim_type": "exclusive_write", 00:10:39.176 "zoned": false, 00:10:39.176 "supported_io_types": { 00:10:39.176 "read": 
true, 00:10:39.176 "write": true, 00:10:39.176 "unmap": true, 00:10:39.176 "flush": true, 00:10:39.176 "reset": true, 00:10:39.176 "nvme_admin": false, 00:10:39.176 "nvme_io": false, 00:10:39.176 "nvme_io_md": false, 00:10:39.176 "write_zeroes": true, 00:10:39.177 "zcopy": true, 00:10:39.177 "get_zone_info": false, 00:10:39.177 "zone_management": false, 00:10:39.177 "zone_append": false, 00:10:39.177 "compare": false, 00:10:39.177 "compare_and_write": false, 00:10:39.177 "abort": true, 00:10:39.177 "seek_hole": false, 00:10:39.177 "seek_data": false, 00:10:39.177 "copy": true, 00:10:39.177 "nvme_iov_md": false 00:10:39.177 }, 00:10:39.177 "memory_domains": [ 00:10:39.177 { 00:10:39.177 "dma_device_id": "system", 00:10:39.177 "dma_device_type": 1 00:10:39.177 }, 00:10:39.177 { 00:10:39.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.177 "dma_device_type": 2 00:10:39.177 } 00:10:39.177 ], 00:10:39.177 "driver_specific": {} 00:10:39.177 } 00:10:39.177 ]' 00:10:39.177 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:39.177 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:39.177 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:39.177 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:39.177 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:39.177 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:39.177 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:39.177 04:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:40.110 04:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:40.110 04:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:40.110 04:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.110 04:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:40.110 04:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
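With the target up, target/filesystem.sh provisions one 512 MiB malloc namespace over TCP and the host connects to it; the rpc_cmd calls above map one-to-one onto scripts/rpc.py. The same sequence written out directly (host NQN/ID as generated earlier in this run):

# The provisioning sequence traced above, expressed as direct rpc.py / nvme-cli calls.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0          # -c 0: no in-capsule data
$RPC bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55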
SPDKISFASTANDAWESOME 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:42.009 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:42.267 04:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:42.833 04:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.767 ************************************ 00:10:43.767 START TEST filesystem_ext4 00:10:43.767 ************************************ 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
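waitforserial and the lsblk/parted steps above locate the freshly attached namespace by the subsystem serial number rather than by assuming a device name, then carve a single GPT partition covering the whole 512 MiB namespace. In isolation:

# Device discovery and partitioning as traced above (serial comes from the subsystem).
SERIAL=SPDKISFASTANDAWESOME

nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP "([\w]*)(?=\s+$SERIAL)")   # e.g. nvme0n1

mkdir -p /mnt/device
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1                        # give udev a moment to create /dev/${nvme_name}p1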
00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:43.767 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:43.767 mke2fs 1.47.0 (5-Feb-2023) 00:10:43.767 Discarding device blocks: 0/522240 done 00:10:43.767 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:43.767 Filesystem UUID: 2d675420-7a56-4f78-aaea-362289e83317 00:10:43.767 Superblock backups stored on blocks: 00:10:43.767 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:43.767 00:10:43.767 Allocating group tables: 0/64 done 00:10:43.767 Writing inode tables: 0/64 done 00:10:44.025 Creating journal (8192 blocks): done 00:10:44.025 Writing superblocks and filesystem accounting information: 0/64 done 00:10:44.025 00:10:44.025 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:44.025 04:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:49.285 
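Each filesystem_<fs> subtest is the same five-step smoke test over that partition: make the filesystem, mount it, create and delete a file with syncs in between, and unmount; any failure trips the ERR trap set up earlier. For the ext4 case just traced:

# The ext4 smoke test traced above, in isolation.
dev=/dev/nvme0n1p1

mkfs.ext4 -F "$dev"            # -F: do not prompt when the device already has data
mount "$dev" /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device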
04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2242678 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:49.285 00:10:49.285 real 0m5.680s 00:10:49.285 user 0m0.024s 00:10:49.285 sys 0m0.047s 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:49.285 ************************************ 00:10:49.285 END TEST filesystem_ext4 00:10:49.285 ************************************ 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.285 ************************************ 00:10:49.285 START TEST filesystem_btrfs 00:10:49.285 ************************************ 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:49.285 04:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:49.285 04:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:49.850 btrfs-progs v6.8.1 00:10:49.850 See https://btrfs.readthedocs.io for more information. 00:10:49.850 00:10:49.850 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:49.850 NOTE: several default settings have changed in version 5.15, please make sure 00:10:49.850 this does not affect your deployments: 00:10:49.850 - DUP for metadata (-m dup) 00:10:49.850 - enabled no-holes (-O no-holes) 00:10:49.850 - enabled free-space-tree (-R free-space-tree) 00:10:49.850 00:10:49.850 Label: (null) 00:10:49.850 UUID: 12f9e316-77ec-4f69-b752-153613bc10b2 00:10:49.850 Node size: 16384 00:10:49.850 Sector size: 4096 (CPU page size: 4096) 00:10:49.850 Filesystem size: 510.00MiB 00:10:49.850 Block group profiles: 00:10:49.850 Data: single 8.00MiB 00:10:49.850 Metadata: DUP 32.00MiB 00:10:49.850 System: DUP 8.00MiB 00:10:49.850 SSD detected: yes 00:10:49.850 Zoned device: no 00:10:49.850 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:49.850 Checksum: crc32c 00:10:49.850 Number of devices: 1 00:10:49.850 Devices: 00:10:49.850 ID SIZE PATH 00:10:49.850 1 510.00MiB /dev/nvme0n1p1 00:10:49.850 00:10:49.850 04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:49.850 04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.417 04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:50.417 04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:50.417 04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:50.417 04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:50.417 04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:50.417 04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:50.417 04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2242678 00:10:50.417 04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:50.417 04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:50.417 04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:50.417 
04:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:50.417 00:10:50.417 real 0m1.132s 00:10:50.417 user 0m0.016s 00:10:50.417 sys 0m0.103s 00:10:50.417 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.417 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:50.417 ************************************ 00:10:50.417 END TEST filesystem_btrfs 00:10:50.417 ************************************ 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.676 ************************************ 00:10:50.676 START TEST filesystem_xfs 00:10:50.676 ************************************ 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:50.676 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:50.676 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:50.676 = sectsz=512 attr=2, projid32bit=1 00:10:50.676 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:50.676 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:50.676 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:50.676 = sunit=0 swidth=0 blks 00:10:50.676 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:50.676 log =internal log bsize=4096 blocks=16384, version=2 00:10:50.676 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:50.676 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:51.610 Discarding blocks...Done. 00:10:51.610 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:51.610 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2242678 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:54.140 00:10:54.140 real 0m3.364s 00:10:54.140 user 0m0.016s 00:10:54.140 sys 0m0.059s 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:54.140 ************************************ 00:10:54.140 END TEST filesystem_xfs 00:10:54.140 ************************************ 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.140 04:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:54.140 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2242678 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2242678 ']' 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2242678 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2242678 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2242678' 00:10:54.141 killing process with pid 2242678 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2242678 00:10:54.141 04:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 2242678 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:54.708 00:10:54.708 real 0m16.738s 00:10:54.708 user 1m4.703s 00:10:54.708 sys 0m2.128s 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.708 ************************************ 00:10:54.708 END TEST nvmf_filesystem_no_in_capsule 00:10:54.708 ************************************ 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.708 ************************************ 00:10:54.708 START TEST nvmf_filesystem_in_capsule 00:10:54.708 ************************************ 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2244771 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2244771 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2244771 ']' 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
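The in-capsule variant launches its own nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then blocks until the application answers on its RPC socket; the DPDK/SPDK initialization messages that follow are that target coming up. Condensed, with paths shown relative to an SPDK checkout and the wait loop written as a simplified stand-in for waitforlisten (the real helper does more bookkeeping):

  # start the NVMe-oF target: instance 0, all tracepoint groups, 4-core mask, as in the trace above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket until the target is ready to take rpc.py calls
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done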
00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.708 04:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.708 [2024-10-28 04:46:45.106339] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:10:54.708 [2024-10-28 04:46:45.106418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.708 [2024-10-28 04:46:45.245502] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:54.708 [2024-10-28 04:46:45.285998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.967 [2024-10-28 04:46:45.337693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.967 [2024-10-28 04:46:45.337772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.967 [2024-10-28 04:46:45.337786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.967 [2024-10-28 04:46:45.337797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.967 [2024-10-28 04:46:45.337806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.967 [2024-10-28 04:46:45.339558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.967 [2024-10-28 04:46:45.339612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.967 [2024-10-28 04:46:45.339732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.967 [2024-10-28 04:46:45.339736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.903 [2024-10-28 04:46:46.187021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.903 Malloc1 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.903 [2024-10-28 04:46:46.360490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:55.903 04:46:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:55.903 { 00:10:55.903 "name": "Malloc1", 00:10:55.903 "aliases": [ 00:10:55.903 "5b9a8b90-fb60-49b3-9445-b20e69871238" 00:10:55.903 ], 00:10:55.903 "product_name": "Malloc disk", 00:10:55.903 "block_size": 512, 00:10:55.903 "num_blocks": 1048576, 00:10:55.903 "uuid": "5b9a8b90-fb60-49b3-9445-b20e69871238", 00:10:55.903 "assigned_rate_limits": { 00:10:55.903 "rw_ios_per_sec": 0, 00:10:55.903 "rw_mbytes_per_sec": 0, 00:10:55.903 "r_mbytes_per_sec": 0, 00:10:55.903 "w_mbytes_per_sec": 0 00:10:55.903 }, 00:10:55.903 "claimed": true, 00:10:55.903 "claim_type": "exclusive_write", 00:10:55.903 "zoned": false, 00:10:55.903 "supported_io_types": { 00:10:55.903 "read": true, 00:10:55.903 "write": true, 00:10:55.903 "unmap": true, 00:10:55.903 "flush": true, 00:10:55.903 "reset": true, 00:10:55.903 "nvme_admin": false, 00:10:55.903 "nvme_io": false, 00:10:55.903 "nvme_io_md": false, 00:10:55.903 "write_zeroes": true, 00:10:55.903 "zcopy": true, 00:10:55.903 "get_zone_info": false, 00:10:55.903 "zone_management": false, 00:10:55.903 "zone_append": false, 00:10:55.903 "compare": false, 00:10:55.903 "compare_and_write": false, 00:10:55.903 "abort": true, 00:10:55.903 "seek_hole": false, 00:10:55.903 "seek_data": false, 00:10:55.903 "copy": true, 00:10:55.903 "nvme_iov_md": false 00:10:55.903 }, 00:10:55.903 "memory_domains": [ 00:10:55.903 { 00:10:55.903 "dma_device_id": "system", 00:10:55.903 "dma_device_type": 1 00:10:55.903 }, 00:10:55.903 { 00:10:55.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.903 "dma_device_type": 2 00:10:55.903 } 00:10:55.903 ], 00:10:55.903 "driver_specific": {} 00:10:55.903 } 00:10:55.903 ]' 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:55.903 04:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:55.903 04:46:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.839 04:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.839 04:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:56.839 04:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.839 04:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:56.839 04:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:58.738 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:58.996 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:59.252 04:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.186 ************************************ 00:11:00.186 START TEST filesystem_in_capsule_ext4 00:11:00.186 ************************************ 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:00.186 04:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:00.186 mke2fs 1.47.0 (5-Feb-2023) 00:11:00.444 Discarding device blocks: 0/522240 done 00:11:00.444 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:00.444 Filesystem UUID: 76a31311-afed-4713-ad83-d2315b026c1c 00:11:00.444 Superblock backups stored on blocks: 00:11:00.444 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:11:00.444 00:11:00.444 Allocating group tables: 0/64 done 00:11:00.444 Writing inode tables: 0/64 done 00:11:00.702 Creating journal (8192 blocks): done 00:11:02.898 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:11:02.898 00:11:02.898 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:02.898 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2244771 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.454 00:11:09.454 real 0m8.180s 00:11:09.454 user 0m0.021s 00:11:09.454 sys 0m0.066s 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:09.454 ************************************ 00:11:09.454 END TEST filesystem_in_capsule_ext4 00:11:09.454 ************************************ 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.454 04:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.454 ************************************ 00:11:09.454 START TEST filesystem_in_capsule_btrfs 00:11:09.454 ************************************ 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:09.454 04:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:09.454 btrfs-progs v6.8.1 00:11:09.454 See https://btrfs.readthedocs.io for more information. 00:11:09.454 00:11:09.454 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:09.454 NOTE: several default settings have changed in version 5.15, please make sure 00:11:09.454 this does not affect your deployments: 00:11:09.454 - DUP for metadata (-m dup) 00:11:09.454 - enabled no-holes (-O no-holes) 00:11:09.454 - enabled free-space-tree (-R free-space-tree) 00:11:09.454 00:11:09.454 Label: (null) 00:11:09.454 UUID: cd3d210c-7389-4a8e-95d9-67aff46706b1 00:11:09.454 Node size: 16384 00:11:09.454 Sector size: 4096 (CPU page size: 4096) 00:11:09.454 Filesystem size: 510.00MiB 00:11:09.454 Block group profiles: 00:11:09.454 Data: single 8.00MiB 00:11:09.454 Metadata: DUP 32.00MiB 00:11:09.454 System: DUP 8.00MiB 00:11:09.454 SSD detected: yes 00:11:09.454 Zoned device: no 00:11:09.454 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:09.454 Checksum: crc32c 00:11:09.454 Number of devices: 1 00:11:09.454 Devices: 00:11:09.454 ID SIZE PATH 00:11:09.454 1 510.00MiB /dev/nvme0n1p1 00:11:09.454 00:11:09.454 04:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:09.454 04:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2244771 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.713 00:11:09.713 real 0m1.159s 00:11:09.713 user 0m0.013s 00:11:09.713 sys 0m0.110s 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:09.713 ************************************ 00:11:09.713 END TEST filesystem_in_capsule_btrfs 00:11:09.713 ************************************ 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.713 ************************************ 00:11:09.713 START TEST filesystem_in_capsule_xfs 00:11:09.713 ************************************ 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:09.713 04:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:09.713 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:09.713 = sectsz=512 attr=2, projid32bit=1 00:11:09.713 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:09.713 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:09.713 data = bsize=4096 blocks=130560, imaxpct=25 00:11:09.713 = sunit=0 swidth=0 blks 00:11:09.713 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:09.713 log =internal log bsize=4096 blocks=16384, version=2 00:11:09.714 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:09.714 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:10.761 Discarding blocks...Done. 
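Every filesystem sub-test in this log drives the same sequence once mkfs finishes: mount the partition over NVMe/TCP, do one small write, unmount, and confirm that both the target process and the block devices are still there. Stripped of the xtrace and retry bookkeeping it is roughly (xfs shown; ext4 uses the -F force flag instead of -f):

  mkfs.xfs -f /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa          # one small write through the NVMe/TCP connection
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                        # target process (pid 2244771 in this run) must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible on the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # and so is the test partition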
00:11:10.761 04:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:10.761 04:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.293 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.293 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:13.293 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.293 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:13.293 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:13.293 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.293 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2244771 00:11:13.293 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.293 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.294 00:11:13.294 real 0m3.202s 00:11:13.294 user 0m0.018s 00:11:13.294 sys 0m0.064s 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:13.294 ************************************ 00:11:13.294 END TEST filesystem_in_capsule_xfs 00:11:13.294 ************************************ 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2244771 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2244771 ']' 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2244771 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2244771 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2244771' 00:11:13.294 killing process with pid 2244771 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2244771 00:11:13.294 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2244771 00:11:13.553 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:13.553 00:11:13.553 real 0m19.048s 00:11:13.553 user 1m13.818s 00:11:13.553 sys 0m2.279s 00:11:13.553 04:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.553 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.553 ************************************ 00:11:13.553 END TEST nvmf_filesystem_in_capsule 00:11:13.553 ************************************ 00:11:13.553 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:13.553 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:13.553 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:13.553 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:13.553 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:13.553 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:13.553 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:13.553 rmmod nvme_tcp 00:11:13.813 rmmod nvme_fabrics 00:11:13.813 rmmod nvme_keyring 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.813 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.719 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:15.719 00:11:15.719 real 0m40.714s 00:11:15.719 user 2m19.668s 00:11:15.719 sys 0m6.196s 00:11:15.719 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.719 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:15.719 
************************************ 00:11:15.719 END TEST nvmf_filesystem 00:11:15.719 ************************************ 00:11:15.719 04:47:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:15.719 04:47:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:15.719 04:47:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.719 04:47:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:15.719 ************************************ 00:11:15.719 START TEST nvmf_target_discovery 00:11:15.719 ************************************ 00:11:15.719 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:15.979 * Looking for test storage... 00:11:15.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1689 -- # lcov --version 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:15.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.979 --rc genhtml_branch_coverage=1 00:11:15.979 --rc genhtml_function_coverage=1 00:11:15.979 --rc genhtml_legend=1 00:11:15.979 --rc geninfo_all_blocks=1 00:11:15.979 --rc geninfo_unexecuted_blocks=1 00:11:15.979 00:11:15.979 ' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:15.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.979 --rc genhtml_branch_coverage=1 00:11:15.979 --rc genhtml_function_coverage=1 00:11:15.979 --rc genhtml_legend=1 00:11:15.979 --rc geninfo_all_blocks=1 00:11:15.979 --rc geninfo_unexecuted_blocks=1 00:11:15.979 00:11:15.979 ' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:15.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.979 --rc genhtml_branch_coverage=1 00:11:15.979 --rc genhtml_function_coverage=1 00:11:15.979 --rc genhtml_legend=1 00:11:15.979 --rc geninfo_all_blocks=1 00:11:15.979 --rc geninfo_unexecuted_blocks=1 00:11:15.979 00:11:15.979 ' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:15.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.979 --rc genhtml_branch_coverage=1 00:11:15.979 --rc genhtml_function_coverage=1 00:11:15.979 --rc genhtml_legend=1 00:11:15.979 --rc geninfo_all_blocks=1 00:11:15.979 --rc geninfo_unexecuted_blocks=1 00:11:15.979 00:11:15.979 ' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:15.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.979 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.884 04:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:17.884 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:17.884 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:17.884 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:17.884 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:17.884 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.885 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.145 04:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:18.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:11:18.145 00:11:18.145 --- 10.0.0.2 ping statistics --- 00:11:18.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.145 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:11:18.145 00:11:18.145 --- 10.0.0.1 ping statistics --- 00:11:18.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.145 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=2249136 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 2249136 00:11:18.145 04:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2249136 ']' 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:18.145 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.145 [2024-10-28 04:47:08.655220] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:11:18.145 [2024-10-28 04:47:08.655323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.403 [2024-10-28 04:47:08.796744] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:18.403 [2024-10-28 04:47:08.839736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.403 [2024-10-28 04:47:08.891844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.403 [2024-10-28 04:47:08.891918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.403 [2024-10-28 04:47:08.891935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.403 [2024-10-28 04:47:08.891950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.403 [2024-10-28 04:47:08.891961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
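The records above show the harness bringing up the SPDK target for the discovery test: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace, waitforlisten blocks until the application answers on the UNIX domain socket /var/tmp/spdk.sock, and the reactors then start on the four cores selected by the 0xF mask. A condensed sketch of that bring-up, assembled from the same commands visible in the trace (the workspace path and the waitforlisten/rpc_cmd wrappers are specific to this test harness, so treat them as environment-specific rather than general SPDK usage):

  # Launch nvmf_tgt in the target namespace: shm id 0 (-i 0), all tracepoint
  # groups enabled (-e 0xFFFF), 4-core mask (-m 0xF), as recorded above.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Harness helper: wait until the process listens on /var/tmp/spdk.sock
  # (the "Waiting for process to start up..." message above comes from here).
  waitforlisten "$nvmfpid"

  # Once the reactors are running, create the TCP transport over RPC,
  # exactly as target/discovery.sh@23 does in the trace that follows.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192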
00:11:18.403 [2024-10-28 04:47:08.893725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.403 [2024-10-28 04:47:08.893779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.403 [2024-10-28 04:47:08.893832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.403 [2024-10-28 04:47:08.893835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.333 [2024-10-28 04:47:09.694836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.333 Null1 00:11:19.333 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 [2024-10-28 04:47:09.735054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 Null2 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:19.334 Null3 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 Null4 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 04:47:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:19.591 00:11:19.591 Discovery Log Number of Records 6, Generation counter 6 00:11:19.591 =====Discovery Log Entry 0====== 00:11:19.591 trtype: tcp 00:11:19.591 adrfam: ipv4 00:11:19.591 subtype: current discovery subsystem 00:11:19.591 treq: not required 00:11:19.591 portid: 0 00:11:19.591 trsvcid: 4420 00:11:19.591 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:19.591 traddr: 10.0.0.2 00:11:19.591 eflags: explicit discovery connections, duplicate discovery information 00:11:19.591 sectype: none 00:11:19.591 =====Discovery Log Entry 1====== 00:11:19.591 trtype: tcp 00:11:19.591 adrfam: ipv4 00:11:19.591 subtype: nvme subsystem 00:11:19.591 treq: not required 00:11:19.591 portid: 0 00:11:19.591 trsvcid: 4420 00:11:19.591 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:19.591 traddr: 10.0.0.2 00:11:19.591 eflags: none 00:11:19.591 sectype: none 00:11:19.591 =====Discovery Log Entry 2====== 00:11:19.591 trtype: tcp 00:11:19.591 adrfam: ipv4 00:11:19.591 subtype: nvme subsystem 00:11:19.591 treq: not required 00:11:19.591 portid: 0 00:11:19.591 trsvcid: 4420 00:11:19.591 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:19.591 traddr: 10.0.0.2 00:11:19.591 eflags: none 00:11:19.591 sectype: none 00:11:19.591 =====Discovery Log Entry 3====== 00:11:19.591 trtype: tcp 00:11:19.591 adrfam: ipv4 00:11:19.591 subtype: nvme subsystem 00:11:19.591 treq: not required 00:11:19.591 portid: 0 00:11:19.591 trsvcid: 4420 00:11:19.591 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:19.591 traddr: 10.0.0.2 00:11:19.591 eflags: none 00:11:19.591 sectype: none 00:11:19.591 =====Discovery Log Entry 4====== 00:11:19.591 trtype: tcp 00:11:19.591 adrfam: ipv4 00:11:19.591 subtype: nvme subsystem 
00:11:19.591 treq: not required 00:11:19.591 portid: 0 00:11:19.591 trsvcid: 4420 00:11:19.591 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:19.591 traddr: 10.0.0.2 00:11:19.591 eflags: none 00:11:19.591 sectype: none 00:11:19.591 =====Discovery Log Entry 5====== 00:11:19.591 trtype: tcp 00:11:19.591 adrfam: ipv4 00:11:19.591 subtype: discovery subsystem referral 00:11:19.591 treq: not required 00:11:19.591 portid: 0 00:11:19.591 trsvcid: 4430 00:11:19.591 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:19.591 traddr: 10.0.0.2 00:11:19.591 eflags: none 00:11:19.591 sectype: none 00:11:19.591 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:19.591 Perform nvmf subsystem discovery via RPC 00:11:19.591 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:19.591 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.591 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.591 [ 00:11:19.591 { 00:11:19.591 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:19.592 "subtype": "Discovery", 00:11:19.592 "listen_addresses": [ 00:11:19.592 { 00:11:19.592 "trtype": "TCP", 00:11:19.592 "adrfam": "IPv4", 00:11:19.592 "traddr": "10.0.0.2", 00:11:19.592 "trsvcid": "4420" 00:11:19.592 } 00:11:19.592 ], 00:11:19.592 "allow_any_host": true, 00:11:19.592 "hosts": [] 00:11:19.592 }, 00:11:19.592 { 00:11:19.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.592 "subtype": "NVMe", 00:11:19.592 "listen_addresses": [ 00:11:19.592 { 00:11:19.592 "trtype": "TCP", 00:11:19.592 "adrfam": "IPv4", 00:11:19.592 "traddr": "10.0.0.2", 00:11:19.592 "trsvcid": "4420" 00:11:19.592 } 00:11:19.592 ], 00:11:19.592 "allow_any_host": true, 00:11:19.592 "hosts": [], 00:11:19.592 "serial_number": "SPDK00000000000001", 00:11:19.592 "model_number": "SPDK bdev Controller", 00:11:19.592 "max_namespaces": 32, 00:11:19.592 "min_cntlid": 1, 00:11:19.592 "max_cntlid": 65519, 00:11:19.592 "namespaces": [ 00:11:19.592 { 00:11:19.592 "nsid": 1, 00:11:19.592 "bdev_name": "Null1", 00:11:19.592 "name": "Null1", 00:11:19.592 "nguid": "3B6B275AFE314D0AAAB0F2E1D6206043", 00:11:19.592 "uuid": "3b6b275a-fe31-4d0a-aab0-f2e1d6206043" 00:11:19.592 } 00:11:19.592 ] 00:11:19.592 }, 00:11:19.592 { 00:11:19.592 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:19.592 "subtype": "NVMe", 00:11:19.592 "listen_addresses": [ 00:11:19.592 { 00:11:19.592 "trtype": "TCP", 00:11:19.592 "adrfam": "IPv4", 00:11:19.592 "traddr": "10.0.0.2", 00:11:19.592 "trsvcid": "4420" 00:11:19.592 } 00:11:19.592 ], 00:11:19.592 "allow_any_host": true, 00:11:19.592 "hosts": [], 00:11:19.592 "serial_number": "SPDK00000000000002", 00:11:19.592 "model_number": "SPDK bdev Controller", 00:11:19.592 "max_namespaces": 32, 00:11:19.592 "min_cntlid": 1, 00:11:19.592 "max_cntlid": 65519, 00:11:19.592 "namespaces": [ 00:11:19.592 { 00:11:19.592 "nsid": 1, 00:11:19.592 "bdev_name": "Null2", 00:11:19.592 "name": "Null2", 00:11:19.592 "nguid": "D6EFA9B2820B465F9EB70229F105CB35", 00:11:19.592 "uuid": "d6efa9b2-820b-465f-9eb7-0229f105cb35" 00:11:19.592 } 00:11:19.592 ] 00:11:19.592 }, 00:11:19.592 { 00:11:19.592 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:19.592 "subtype": "NVMe", 00:11:19.592 "listen_addresses": [ 00:11:19.592 { 00:11:19.592 "trtype": "TCP", 00:11:19.592 "adrfam": "IPv4", 00:11:19.592 "traddr": "10.0.0.2", 
00:11:19.592 "trsvcid": "4420" 00:11:19.592 } 00:11:19.592 ], 00:11:19.592 "allow_any_host": true, 00:11:19.592 "hosts": [], 00:11:19.592 "serial_number": "SPDK00000000000003", 00:11:19.592 "model_number": "SPDK bdev Controller", 00:11:19.592 "max_namespaces": 32, 00:11:19.592 "min_cntlid": 1, 00:11:19.592 "max_cntlid": 65519, 00:11:19.592 "namespaces": [ 00:11:19.592 { 00:11:19.592 "nsid": 1, 00:11:19.592 "bdev_name": "Null3", 00:11:19.592 "name": "Null3", 00:11:19.592 "nguid": "F8411BFE0D194E0797F0F9F1370277D7", 00:11:19.592 "uuid": "f8411bfe-0d19-4e07-97f0-f9f1370277d7" 00:11:19.592 } 00:11:19.592 ] 00:11:19.592 }, 00:11:19.592 { 00:11:19.592 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:19.592 "subtype": "NVMe", 00:11:19.592 "listen_addresses": [ 00:11:19.592 { 00:11:19.592 "trtype": "TCP", 00:11:19.592 "adrfam": "IPv4", 00:11:19.592 "traddr": "10.0.0.2", 00:11:19.592 "trsvcid": "4420" 00:11:19.592 } 00:11:19.592 ], 00:11:19.592 "allow_any_host": true, 00:11:19.592 "hosts": [], 00:11:19.592 "serial_number": "SPDK00000000000004", 00:11:19.592 "model_number": "SPDK bdev Controller", 00:11:19.592 "max_namespaces": 32, 00:11:19.592 "min_cntlid": 1, 00:11:19.592 "max_cntlid": 65519, 00:11:19.592 "namespaces": [ 00:11:19.592 { 00:11:19.592 "nsid": 1, 00:11:19.592 "bdev_name": "Null4", 00:11:19.592 "name": "Null4", 00:11:19.592 "nguid": "9D053467DBC54675BC42DD397415AD6E", 00:11:19.592 "uuid": "9d053467-dbc5-4675-bc42-dd397415ad6e" 00:11:19.592 } 00:11:19.592 ] 00:11:19.592 } 00:11:19.592 ] 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.592 04:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:19.592 04:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.592 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.850 rmmod nvme_tcp 00:11:19.850 rmmod nvme_fabrics 00:11:19.850 rmmod nvme_keyring 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 2249136 ']' 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 2249136 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2249136 ']' 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2249136 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2249136 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2249136' 00:11:19.850 killing process with pid 2249136 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2249136 00:11:19.850 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2249136 00:11:20.109 04:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:20.109 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:20.109 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:20.109 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:20.109 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:20.109 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:20.109 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:20.109 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.109 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:20.109 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.109 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.109 04:47:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.015 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.015 00:11:22.015 real 0m6.245s 00:11:22.015 user 0m7.577s 00:11:22.015 sys 0m1.874s 00:11:22.015 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.015 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.015 ************************************ 00:11:22.015 END TEST nvmf_target_discovery 00:11:22.015 ************************************ 00:11:22.015 04:47:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:22.015 04:47:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:22.015 04:47:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.015 04:47:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:22.015 ************************************ 00:11:22.015 START TEST nvmf_referrals 00:11:22.015 ************************************ 00:11:22.015 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:22.274 * Looking for test storage... 
00:11:22.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1689 -- # lcov --version 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.274 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:22.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.275 --rc genhtml_branch_coverage=1 00:11:22.275 --rc genhtml_function_coverage=1 00:11:22.275 --rc genhtml_legend=1 00:11:22.275 --rc geninfo_all_blocks=1 00:11:22.275 --rc geninfo_unexecuted_blocks=1 00:11:22.275 00:11:22.275 ' 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:22.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.275 --rc genhtml_branch_coverage=1 00:11:22.275 --rc genhtml_function_coverage=1 00:11:22.275 --rc genhtml_legend=1 00:11:22.275 --rc geninfo_all_blocks=1 00:11:22.275 --rc geninfo_unexecuted_blocks=1 00:11:22.275 00:11:22.275 ' 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:22.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.275 --rc genhtml_branch_coverage=1 00:11:22.275 --rc genhtml_function_coverage=1 00:11:22.275 --rc genhtml_legend=1 00:11:22.275 --rc geninfo_all_blocks=1 00:11:22.275 --rc geninfo_unexecuted_blocks=1 00:11:22.275 00:11:22.275 ' 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:22.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.275 --rc genhtml_branch_coverage=1 00:11:22.275 --rc genhtml_function_coverage=1 00:11:22.275 --rc genhtml_legend=1 00:11:22.275 --rc geninfo_all_blocks=1 00:11:22.275 --rc geninfo_unexecuted_blocks=1 00:11:22.275 00:11:22.275 ' 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
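The "[: : integer expression expected" complaint just above comes from bash's test builtin being handed an empty string where a number is required ('[' '' -eq 1 ']' at nvmf/common.sh line 33); the comparison simply evaluates as false with a non-zero status and, as the following trace shows, the run continues. A minimal illustration, not taken from common.sh:

    VAR=
    [ "$VAR" -eq 1 ]        # bash: [: : integer expression expected (non-zero status)
    [ "${VAR:-0}" -eq 1 ]   # common guard: treat an empty value as 0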
00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.275 04:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:24.809 04:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:24.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.809 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:24.810 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:24.810 
04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:24.810 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:24.810 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:24.810 04:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.810 04:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:24.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:11:24.810 00:11:24.810 --- 10.0.0.2 ping statistics --- 00:11:24.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.810 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:11:24.810 00:11:24.810 --- 10.0.0.1 ping statistics --- 00:11:24.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.810 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=2251262 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 2251262 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2251262 ']' 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
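Condensed, the nvmf_tcp_init and nvmfappstart steps traced above amount to the following sketch (commands lifted from the trace; cvl_0_0/cvl_0_1 are the two E810 ports detected earlier, and the nvmf_tgt path is abbreviated):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # default ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> default ns
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF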
00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:24.810 04:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.810 [2024-10-28 04:47:15.178402] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:11:24.810 [2024-10-28 04:47:15.178491] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.810 [2024-10-28 04:47:15.318042] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:24.810 [2024-10-28 04:47:15.358693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.069 [2024-10-28 04:47:15.411070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.069 [2024-10-28 04:47:15.411136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.069 [2024-10-28 04:47:15.411152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.069 [2024-10-28 04:47:15.411165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.069 [2024-10-28 04:47:15.411177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.069 [2024-10-28 04:47:15.412942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.069 [2024-10-28 04:47:15.412999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.069 [2024-10-28 04:47:15.413053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.069 [2024-10-28 04:47:15.413057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.634 [2024-10-28 04:47:16.171035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:25.634 04:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.634 [2024-10-28 04:47:16.183243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.634 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.892 04:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:25.892 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:26.151 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 
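At this point the test has re-added the 127.0.0.2:4430 referral twice, once with -n discovery and once with -n nqn.2016-06.io.spdk:cnode1, which is why the sorted traddr list above reads "127.0.0.2 127.0.0.2". Stripped of the rpc_cmd wrapper, the sequence looks roughly like this (scripts/rpc.py path assumed):

    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # -> 127.0.0.2
    #    127.0.0.2

A referral carrying the discovery NQN is advertised to hosts as a "discovery subsystem referral" entry, while one carrying a subsystem NQN shows up as an "nvme subsystem" entry, which is exactly what the nvme discover checks below go on to verify.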
00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.409 04:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.667 04:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.667 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.925 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:27.183 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:27.183 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 
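The removal path mirrors the add path: drop one referral over RPC, then confirm the result both from the RPC side and from what the discovery controller now reports. A hedged sketch under the same assumptions as the previous one:
  # Remove the referral that pointed at nqn.2016-06.io.spdk:cnode1; the referral
  # to the discovery subsystem is left in place for the moment.
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

  # RPC view: a single referral with traddr 127.0.0.2 should remain.
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

  # Host view: the remaining log page entry should be a "discovery subsystem
  # referral" whose subnqn is the well-known discovery NQN.
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
                --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
                -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'
  # Expected here: nqn.2014-08.org.nvmexpress.discovery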
00:11:27.183 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:27.183 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:27.183 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:27.183 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:27.441 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:27.699 rmmod nvme_tcp 00:11:27.699 rmmod nvme_fabrics 00:11:27.699 rmmod nvme_keyring 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 2251262 ']' 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 2251262 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2251262 ']' 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2251262 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2251262 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2251262' 00:11:27.699 killing process with pid 2251262 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2251262 00:11:27.699 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2251262 00:11:27.959 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:27.959 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:27.959 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:27.959 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:27.959 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:27.959 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:27.959 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:27.959 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.959 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:27.959 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.959 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.959 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.865 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.865 00:11:29.865 real 0m7.834s 00:11:29.865 user 0m13.641s 00:11:29.865 sys 0m2.331s 00:11:29.865 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:29.865 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.865 ************************************ 00:11:29.865 END TEST nvmf_referrals 00:11:29.865 ************************************ 00:11:29.865 04:47:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:29.865 04:47:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:29.865 04:47:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.865 04:47:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:30.125 ************************************ 00:11:30.125 START TEST nvmf_connect_disconnect 00:11:30.125 ************************************ 00:11:30.125 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:30.125 * Looking for test storage... 
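The handoff between the two tests goes through the generic nvmftestfini teardown traced above. Roughly, and only as an illustration of the commands visible in this run (the PID, namespace name and interface names are specific to it, and the namespace removal is an assumption about what _remove_spdk_ns amounts to):
  # Unload the host-side NVMe/TCP module; dependent modules (nvme_fabrics,
  # nvme_keyring in this run) are removed along with it.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Stop the nvmf_tgt reactor process that served the referrals test.
  kill 2251262

  # Drop the SPDK_NVMF iptables rules that were added for the test ports.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Tear down the target-side network namespace (roughly what _remove_spdk_ns
  # does) and flush the initiator-side interface.
  ip netns del cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1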
00:11:30.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.125 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:30.125 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1689 -- # lcov --version 00:11:30.125 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:30.125 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:30.125 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.125 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.125 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:30.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.126 --rc genhtml_branch_coverage=1 00:11:30.126 --rc genhtml_function_coverage=1 00:11:30.126 --rc genhtml_legend=1 00:11:30.126 --rc geninfo_all_blocks=1 00:11:30.126 --rc geninfo_unexecuted_blocks=1 00:11:30.126 00:11:30.126 ' 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:30.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.126 --rc genhtml_branch_coverage=1 00:11:30.126 --rc genhtml_function_coverage=1 00:11:30.126 --rc genhtml_legend=1 00:11:30.126 --rc geninfo_all_blocks=1 00:11:30.126 --rc geninfo_unexecuted_blocks=1 00:11:30.126 00:11:30.126 ' 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:30.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.126 --rc genhtml_branch_coverage=1 00:11:30.126 --rc genhtml_function_coverage=1 00:11:30.126 --rc genhtml_legend=1 00:11:30.126 --rc geninfo_all_blocks=1 00:11:30.126 --rc geninfo_unexecuted_blocks=1 00:11:30.126 00:11:30.126 ' 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:30.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.126 --rc genhtml_branch_coverage=1 00:11:30.126 --rc genhtml_function_coverage=1 00:11:30.126 --rc genhtml_legend=1 00:11:30.126 --rc geninfo_all_blocks=1 00:11:30.126 --rc geninfo_unexecuted_blocks=1 00:11:30.126 00:11:30.126 ' 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.126 04:47:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:30.126 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:30.127 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.127 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.127 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.127 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:30.127 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:30.127 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:30.127 04:47:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:32.660 
04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:32.660 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.660 
04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:32.660 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.660 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:32.661 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
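The device probe above can be reproduced by hand: for a given PCI function, the kernel exposes its network interfaces under sysfs, which is exactly the glob the script expands. A small sketch using the first E810 port found in this run as the example address:
  # List the net devices bound to PCI function 0000:0a:00.0 (an Intel E810 port here).
  pci=0000:0a:00.0
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] || continue      # an unexpanded glob means no net interface is bound
      echo "Found net device under $pci: ${dev##*/}"
  done
  # In this run the loop yields cvl_0_0 for 0000:0a:00.0 and cvl_0_1 for 0000:0a:00.1.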
00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:32.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:32.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:11:32.661 00:11:32.661 --- 10.0.0.2 ping statistics --- 00:11:32.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.661 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:32.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:11:32.661 00:11:32.661 --- 10.0.0.1 ping statistics --- 00:11:32.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.661 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=2253639 00:11:32.661 04:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 2253639 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2253639 ']' 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:32.661 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.661 [2024-10-28 04:47:22.928738] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:11:32.661 [2024-10-28 04:47:22.928829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.661 [2024-10-28 04:47:23.068480] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:32.661 [2024-10-28 04:47:23.105478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.661 [2024-10-28 04:47:23.154111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.661 [2024-10-28 04:47:23.154190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.661 [2024-10-28 04:47:23.154203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.661 [2024-10-28 04:47:23.154214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.661 [2024-10-28 04:47:23.154223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
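The target for the connect/disconnect loop is started inside the namespace exactly as traced above, then configured with the RPCs that follow. As a standalone sketch (binary path, core mask and addresses copied from this run; waitforlisten is a harness helper that polls the RPC socket, replaced here by a plain loop):
  # Start nvmf_tgt on cores 0-3 inside the target namespace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Wait until the application is listening on its RPC socket before issuing RPCs.
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done

  # Minimal data path for the test: TCP transport, one 64 MiB malloc bdev with
  # 512-byte blocks, one subsystem carrying that namespace, listening on 10.0.0.2:4420.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420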
00:11:32.661 [2024-10-28 04:47:23.155907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.661 [2024-10-28 04:47:23.155985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.661 [2024-10-28 04:47:23.155988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.661 [2024-10-28 04:47:23.155927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:33.595 [2024-10-28 04:47:23.984292] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.595 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:33.595 04:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:33.595 [2024-10-28 04:47:24.048579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:33.595 04:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:36.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.845 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:14:39.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.723 [2024-10-28 04:50:36.931464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15df400 is same with the state(6) to be set 00:14:46.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:26.016 rmmod nvme_tcp 00:15:26.016 rmmod nvme_fabrics 00:15:26.016 rmmod nvme_keyring 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 2253639 ']' 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 2253639 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2253639 ']' 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@954 -- # kill -0 2253639 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:15:26.016 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:26.017 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2253639 00:15:26.280 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:26.280 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:26.280 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2253639' 00:15:26.280 killing process with pid 2253639 00:15:26.280 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2253639 00:15:26.280 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2253639 00:15:26.280 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:26.280 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:26.280 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:26.280 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:26.280 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:15:26.280 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:26.280 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:15:26.599 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:26.599 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:26.599 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.599 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.599 04:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.528 04:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:28.528 00:15:28.528 real 3m58.435s 00:15:28.528 user 15m8.705s 00:15:28.528 sys 0m35.360s 00:15:28.528 04:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:28.528 04:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:28.528 ************************************ 00:15:28.528 END TEST nvmf_connect_disconnect 00:15:28.528 ************************************ 00:15:28.528 04:51:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:28.528 04:51:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 
-- # '[' 3 -le 1 ']' 00:15:28.528 04:51:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:28.528 04:51:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:28.528 ************************************ 00:15:28.528 START TEST nvmf_multitarget 00:15:28.528 ************************************ 00:15:28.528 04:51:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:28.528 * Looking for test storage... 00:15:28.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1689 -- # lcov --version 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:28.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.528 --rc genhtml_branch_coverage=1 00:15:28.528 --rc genhtml_function_coverage=1 00:15:28.528 --rc genhtml_legend=1 00:15:28.528 --rc geninfo_all_blocks=1 00:15:28.528 --rc geninfo_unexecuted_blocks=1 00:15:28.528 00:15:28.528 ' 00:15:28.528 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:28.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.528 --rc genhtml_branch_coverage=1 00:15:28.528 --rc genhtml_function_coverage=1 00:15:28.528 --rc genhtml_legend=1 00:15:28.528 --rc geninfo_all_blocks=1 00:15:28.528 --rc geninfo_unexecuted_blocks=1 00:15:28.528 00:15:28.529 ' 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:28.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.529 --rc genhtml_branch_coverage=1 00:15:28.529 --rc genhtml_function_coverage=1 00:15:28.529 --rc genhtml_legend=1 00:15:28.529 --rc geninfo_all_blocks=1 00:15:28.529 --rc geninfo_unexecuted_blocks=1 00:15:28.529 00:15:28.529 ' 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:28.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.529 --rc genhtml_branch_coverage=1 00:15:28.529 --rc genhtml_function_coverage=1 00:15:28.529 --rc genhtml_legend=1 00:15:28.529 --rc geninfo_all_blocks=1 00:15:28.529 --rc geninfo_unexecuted_blocks=1 00:15:28.529 00:15:28.529 ' 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.529 04:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:28.529 04:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:28.529 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:31.064 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:31.064 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:31.064 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:31.064 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:31.064 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:31.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:15:31.065 00:15:31.065 --- 10.0.0.2 ping statistics --- 00:15:31.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.065 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:31.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:31.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:15:31.065 00:15:31.065 --- 10.0.0.1 ping statistics --- 00:15:31.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.065 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=2284821 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 2284821 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2284821 ']' 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:31.065 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:31.065 [2024-10-28 04:51:21.349672] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:15:31.065 [2024-10-28 04:51:21.349772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.065 [2024-10-28 04:51:21.489704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:31.065 [2024-10-28 04:51:21.531510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.065 [2024-10-28 04:51:21.582929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.065 [2024-10-28 04:51:21.582994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.065 [2024-10-28 04:51:21.583010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.065 [2024-10-28 04:51:21.583023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.065 [2024-10-28 04:51:21.583034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.065 [2024-10-28 04:51:21.584774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.065 [2024-10-28 04:51:21.584800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.065 [2024-10-28 04:51:21.584856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.065 [2024-10-28 04:51:21.584859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.324 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.324 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:15:31.324 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:31.324 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:31.324 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:31.324 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.324 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:31.324 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:31.324 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:31.324 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:31.324 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:31.582 "nvmf_tgt_1" 00:15:31.582 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:31.582 "nvmf_tgt_2" 00:15:31.582 04:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:31.582 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:31.840 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:31.840 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:31.840 true 00:15:31.840 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:31.840 true 00:15:31.840 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:31.840 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:32.097 rmmod nvme_tcp 00:15:32.097 rmmod nvme_fabrics 00:15:32.097 rmmod nvme_keyring 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 2284821 ']' 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 2284821 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2284821 ']' 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2284821 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2284821 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2284821' 00:15:32.097 killing process with pid 2284821 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2284821 00:15:32.097 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2284821 00:15:32.357 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:32.357 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:32.357 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:32.357 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:32.357 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:15:32.357 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:32.357 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:15:32.357 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:32.357 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:32.357 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.357 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.357 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.888 04:51:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:34.888 00:15:34.888 real 0m5.923s 00:15:34.888 user 0m6.706s 00:15:34.888 sys 0m1.921s 00:15:34.888 04:51:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.889 04:51:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:34.889 ************************************ 00:15:34.889 END TEST nvmf_multitarget 00:15:34.889 ************************************ 00:15:34.889 04:51:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:34.889 04:51:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:34.889 04:51:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.889 04:51:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.889 ************************************ 00:15:34.889 START TEST nvmf_rpc 00:15:34.889 ************************************ 00:15:34.889 04:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:34.889 * Looking for test storage... 
00:15:34.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.889 04:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:34.889 04:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:15:34.889 04:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:34.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.889 --rc genhtml_branch_coverage=1 00:15:34.889 --rc genhtml_function_coverage=1 00:15:34.889 --rc genhtml_legend=1 00:15:34.889 --rc geninfo_all_blocks=1 00:15:34.889 --rc geninfo_unexecuted_blocks=1 00:15:34.889 00:15:34.889 ' 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:34.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.889 --rc genhtml_branch_coverage=1 00:15:34.889 --rc genhtml_function_coverage=1 00:15:34.889 --rc genhtml_legend=1 00:15:34.889 --rc geninfo_all_blocks=1 00:15:34.889 --rc geninfo_unexecuted_blocks=1 00:15:34.889 00:15:34.889 ' 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:34.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.889 --rc genhtml_branch_coverage=1 00:15:34.889 --rc genhtml_function_coverage=1 00:15:34.889 --rc genhtml_legend=1 00:15:34.889 --rc geninfo_all_blocks=1 00:15:34.889 --rc geninfo_unexecuted_blocks=1 00:15:34.889 00:15:34.889 ' 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:34.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.889 --rc genhtml_branch_coverage=1 00:15:34.889 --rc genhtml_function_coverage=1 00:15:34.889 --rc genhtml_legend=1 00:15:34.889 --rc geninfo_all_blocks=1 00:15:34.889 --rc geninfo_unexecuted_blocks=1 00:15:34.889 00:15:34.889 ' 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:34.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:34.889 04:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:34.889 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:36.789 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:36.789 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:36.789 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:36.789 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.789 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:36.790 04:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:36.790 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:37.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:15:37.048 00:15:37.048 --- 10.0.0.2 ping statistics --- 00:15:37.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.048 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:37.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:15:37.048 00:15:37.048 --- 10.0.0.1 ping statistics --- 00:15:37.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.048 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=2287028 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 2287028 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2287028 ']' 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.048 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.048 [2024-10-28 04:51:27.493310] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:15:37.048 [2024-10-28 04:51:27.493405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.048 [2024-10-28 04:51:27.641616] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
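At this point the harness has finished preparing the physical TCP test bed and is bringing up nvmf_tgt inside the target namespace: one port of the E810-family (ice) NIC, cvl_0_0, was moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, its peer port cvl_0_1 stayed in the root namespace as the initiator side with 10.0.0.1/24, TCP port 4420 was opened in iptables, reachability was verified with ping in both directions, and the nvme-tcp kernel module was loaded. A condensed sketch of the equivalent manual preparation follows; the interface names and addresses are the ones observed in this run and would differ on other hosts, and the nvmf_tgt path is given relative to an SPDK checkout.

  # interface names from this run: cvl_0_0 (target side), cvl_0_1 (initiator side)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
  modprobe nvme-tcp
  # the target is then launched inside the namespace, as in the trace that follows:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF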
00:15:37.306 [2024-10-28 04:51:27.683724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.306 [2024-10-28 04:51:27.736976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.307 [2024-10-28 04:51:27.737046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.307 [2024-10-28 04:51:27.737062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.307 [2024-10-28 04:51:27.737076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.307 [2024-10-28 04:51:27.737088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.307 [2024-10-28 04:51:27.738847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.307 [2024-10-28 04:51:27.738881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.307 [2024-10-28 04:51:27.738934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.307 [2024-10-28 04:51:27.738938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:37.307 "tick_rate": 2693500000, 00:15:37.307 "poll_groups": [ 00:15:37.307 { 00:15:37.307 "name": "nvmf_tgt_poll_group_000", 00:15:37.307 "admin_qpairs": 0, 00:15:37.307 "io_qpairs": 0, 00:15:37.307 "current_admin_qpairs": 0, 00:15:37.307 "current_io_qpairs": 0, 00:15:37.307 "pending_bdev_io": 0, 00:15:37.307 "completed_nvme_io": 0, 00:15:37.307 "transports": [] 00:15:37.307 }, 00:15:37.307 { 00:15:37.307 "name": "nvmf_tgt_poll_group_001", 00:15:37.307 "admin_qpairs": 0, 00:15:37.307 "io_qpairs": 0, 00:15:37.307 "current_admin_qpairs": 0, 00:15:37.307 "current_io_qpairs": 0, 00:15:37.307 "pending_bdev_io": 0, 00:15:37.307 "completed_nvme_io": 0, 00:15:37.307 "transports": [] 00:15:37.307 }, 00:15:37.307 { 00:15:37.307 "name": "nvmf_tgt_poll_group_002", 00:15:37.307 "admin_qpairs": 0, 00:15:37.307 "io_qpairs": 0, 00:15:37.307 "current_admin_qpairs": 0, 00:15:37.307 "current_io_qpairs": 0, 00:15:37.307 "pending_bdev_io": 0, 00:15:37.307 "completed_nvme_io": 0, 00:15:37.307 "transports": [] 00:15:37.307 }, 00:15:37.307 { 00:15:37.307 "name": "nvmf_tgt_poll_group_003", 00:15:37.307 "admin_qpairs": 0, 
00:15:37.307 "io_qpairs": 0, 00:15:37.307 "current_admin_qpairs": 0, 00:15:37.307 "current_io_qpairs": 0, 00:15:37.307 "pending_bdev_io": 0, 00:15:37.307 "completed_nvme_io": 0, 00:15:37.307 "transports": [] 00:15:37.307 } 00:15:37.307 ] 00:15:37.307 }' 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:37.307 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.566 [2024-10-28 04:51:27.959997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:37.566 "tick_rate": 2693500000, 00:15:37.566 "poll_groups": [ 00:15:37.566 { 00:15:37.566 "name": "nvmf_tgt_poll_group_000", 00:15:37.566 "admin_qpairs": 0, 00:15:37.566 "io_qpairs": 0, 00:15:37.566 "current_admin_qpairs": 0, 00:15:37.566 "current_io_qpairs": 0, 00:15:37.566 "pending_bdev_io": 0, 00:15:37.566 "completed_nvme_io": 0, 00:15:37.566 "transports": [ 00:15:37.566 { 00:15:37.566 "trtype": "TCP" 00:15:37.566 } 00:15:37.566 ] 00:15:37.566 }, 00:15:37.566 { 00:15:37.566 "name": "nvmf_tgt_poll_group_001", 00:15:37.566 "admin_qpairs": 0, 00:15:37.566 "io_qpairs": 0, 00:15:37.566 "current_admin_qpairs": 0, 00:15:37.566 "current_io_qpairs": 0, 00:15:37.566 "pending_bdev_io": 0, 00:15:37.566 "completed_nvme_io": 0, 00:15:37.566 "transports": [ 00:15:37.566 { 00:15:37.566 "trtype": "TCP" 00:15:37.566 } 00:15:37.566 ] 00:15:37.566 }, 00:15:37.566 { 00:15:37.566 "name": "nvmf_tgt_poll_group_002", 00:15:37.566 "admin_qpairs": 0, 00:15:37.566 "io_qpairs": 0, 00:15:37.566 "current_admin_qpairs": 0, 00:15:37.566 "current_io_qpairs": 0, 00:15:37.566 "pending_bdev_io": 0, 00:15:37.566 "completed_nvme_io": 0, 00:15:37.566 "transports": [ 00:15:37.566 { 00:15:37.566 "trtype": "TCP" 00:15:37.566 } 00:15:37.566 ] 00:15:37.566 }, 00:15:37.566 { 00:15:37.566 "name": "nvmf_tgt_poll_group_003", 00:15:37.566 "admin_qpairs": 0, 00:15:37.566 "io_qpairs": 0, 00:15:37.566 "current_admin_qpairs": 0, 00:15:37.566 "current_io_qpairs": 0, 00:15:37.566 "pending_bdev_io": 
0, 00:15:37.566 "completed_nvme_io": 0, 00:15:37.566 "transports": [ 00:15:37.566 { 00:15:37.566 "trtype": "TCP" 00:15:37.566 } 00:15:37.566 ] 00:15:37.566 } 00:15:37.566 ] 00:15:37.566 }' 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:37.566 04:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.566 Malloc1 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.566 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.567 04:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.567 [2024-10-28 04:51:28.111363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:37.567 [2024-10-28 04:51:28.133884] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:37.567 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:37.567 could not add new controller: failed to write to nvme-fabrics device 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.567 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.825 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
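The failed connect recorded just above is the expected negative half of the host access-control check: the subsystem had allow_any_host disabled (nvmf_subsystem_allow_any_host -d), so a connection attempt from a host NQN that is not on its allow list is rejected with "does not allow host", and only after that NQN is added (the nvmf_subsystem_add_host call that follows) does the same connect succeed. A minimal sketch of the sequence, using SPDK's scripts/rpc.py client against the default /var/tmp/spdk.sock socket (the trace drives the same RPCs through its rpc_cmd helper; the host NQN below is the one generated for this run):

  # target side
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # enforce the allow list
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: rejected while the host NQN is not on the allow list ...
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  # ... accepted once it has been added explicitly
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55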
00:15:37.825 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.825 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.825 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.825 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.825 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:38.391 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:38.391 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:38.391 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.391 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:38.391 04:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:40.287 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:40.546 [2024-10-28 04:51:30.899681] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:40.546 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:40.546 could not add new controller: failed to write to nvme-fabrics device 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.546 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:41.113 04:51:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:41.113 04:51:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:41.113 04:51:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.113 04:51:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:41.113 04:51:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:43.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.642 [2024-10-28 04:51:33.772212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.642 04:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:43.900 04:51:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:43.900 04:51:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:43.900 04:51:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:43.900 04:51:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:43.900 04:51:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:46.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:46.428 04:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.428 [2024-10-28 04:51:36.635606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.428 04:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:46.687 04:51:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:46.687 04:51:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:46.687 04:51:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:46.687 04:51:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:46.687 04:51:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:49.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.213 [2024-10-28 04:51:39.414550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.213 04:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:49.470 04:51:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:49.470 04:51:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:49.470 04:51:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:49.470 04:51:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:49.470 04:51:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:51.998 04:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:51.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.998 [2024-10-28 04:51:42.206935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
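The trace here is inside one of the five iterations driven by target/rpc.sh (loops=5): each pass recreates nqn.2016-06.io.spdk:cnode1, adds the TCP listener on 10.0.0.2:4420, attaches Malloc1 as namespace 5, re-enables allow_any_host, connects from the initiator, waits for the block device to show up by its serial number, then disconnects and removes the namespace and subsystem again. A schematic of one iteration, condensed from the surrounding trace (the wait loops below are simplified stand-ins for the framework's waitforserial/waitforserial_disconnect helpers, not their exact code):

  for i in $(seq 1 5); do
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
      scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
      # wait until the namespace appears as a block device with the expected serial
      until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      # wait until it is gone again before tearing the subsystem down
      while lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
      scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done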
00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.998 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:52.563 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:52.563 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:52.563 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.563 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:52.563 04:51:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:54.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:54.460 04:51:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.460 [2024-10-28 04:51:45.029609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.460 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.395 04:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:55.395 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:55.395 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:55.395 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:55.395 04:51:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.295 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.296 [2024-10-28 04:51:47.851786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.296 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.554 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.554 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.554 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.554 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.554 [2024-10-28 04:51:47.899735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.554 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.554 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.554 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.554 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.554 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.554 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.554 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.554 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 [2024-10-28 04:51:47.947800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 [2024-10-28 04:51:47.995821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 [2024-10-28 04:51:48.043879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:57.555 "tick_rate": 2693500000, 00:15:57.555 "poll_groups": [ 00:15:57.555 { 00:15:57.555 "name": "nvmf_tgt_poll_group_000", 00:15:57.555 "admin_qpairs": 2, 00:15:57.555 "io_qpairs": 84, 00:15:57.555 "current_admin_qpairs": 0, 00:15:57.555 "current_io_qpairs": 0, 00:15:57.555 "pending_bdev_io": 0, 00:15:57.555 "completed_nvme_io": 136, 00:15:57.555 "transports": [ 00:15:57.555 { 00:15:57.555 "trtype": "TCP" 00:15:57.555 } 00:15:57.555 ] 00:15:57.555 }, 00:15:57.555 { 00:15:57.555 "name": "nvmf_tgt_poll_group_001", 00:15:57.555 "admin_qpairs": 2, 00:15:57.555 "io_qpairs": 84, 00:15:57.555 "current_admin_qpairs": 0, 00:15:57.555 "current_io_qpairs": 0, 00:15:57.555 "pending_bdev_io": 0, 00:15:57.555 "completed_nvme_io": 133, 00:15:57.555 "transports": [ 00:15:57.555 { 00:15:57.555 "trtype": "TCP" 00:15:57.555 } 00:15:57.555 ] 00:15:57.555 }, 00:15:57.555 { 00:15:57.555 "name": "nvmf_tgt_poll_group_002", 00:15:57.555 "admin_qpairs": 1, 00:15:57.555 "io_qpairs": 84, 00:15:57.555 "current_admin_qpairs": 0, 00:15:57.555 "current_io_qpairs": 0, 00:15:57.555 "pending_bdev_io": 0, 00:15:57.555 "completed_nvme_io": 184, 00:15:57.555 "transports": [ 00:15:57.555 { 00:15:57.555 "trtype": "TCP" 00:15:57.555 } 00:15:57.555 ] 00:15:57.555 }, 00:15:57.555 { 00:15:57.555 "name": "nvmf_tgt_poll_group_003", 00:15:57.555 "admin_qpairs": 2, 00:15:57.555 "io_qpairs": 84, 00:15:57.555 "current_admin_qpairs": 0, 00:15:57.555 "current_io_qpairs": 0, 00:15:57.555 "pending_bdev_io": 0, 00:15:57.555 "completed_nvme_io": 233, 00:15:57.555 "transports": [ 00:15:57.555 { 00:15:57.555 "trtype": "TCP" 00:15:57.555 } 00:15:57.555 ] 00:15:57.555 } 00:15:57.555 ] 00:15:57.555 }' 00:15:57.555 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:57.556 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 
'filter=.poll_groups[].admin_qpairs' 00:15:57.556 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:57.556 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:57.556 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:57.556 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:57.556 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:57.556 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:57.556 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:57.814 rmmod nvme_tcp 00:15:57.814 rmmod nvme_fabrics 00:15:57.814 rmmod nvme_keyring 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 2287028 ']' 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 2287028 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2287028 ']' 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2287028 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2287028 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2287028' 00:15:57.814 killing process with pid 2287028 00:15:57.814 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2287028 00:15:57.814 04:51:48 
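[editor's note] The jsum checks a few lines above reduce the nvmf_get_stats JSON to single totals before teardown. A minimal sketch of that aggregation, assuming the rpc_cmd wrapper from common.sh:

  # sum a per-poll-group counter across all poll groups, as target/rpc.sh's jsum does
  rpc_cmd nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'   # 7 in the trace above
  rpc_cmd nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1}END{print s}'   # 336 in the trace above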
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2287028 00:15:58.072 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:58.072 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:58.072 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:58.072 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:58.072 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:15:58.072 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:58.072 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:15:58.072 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:58.072 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:58.072 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.072 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.072 04:51:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.061 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:00.061 00:16:00.061 real 0m25.656s 00:16:00.061 user 1m22.531s 00:16:00.061 sys 0m4.345s 00:16:00.061 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:00.061 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.061 ************************************ 00:16:00.061 END TEST nvmf_rpc 00:16:00.061 ************************************ 00:16:00.061 04:51:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:00.061 04:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:00.061 04:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:00.061 04:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:00.061 ************************************ 00:16:00.061 START TEST nvmf_invalid 00:16:00.061 ************************************ 00:16:00.061 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:00.320 * Looking for test storage... 
00:16:00.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1689 -- # lcov --version 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:16:00.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.320 --rc genhtml_branch_coverage=1 00:16:00.320 --rc genhtml_function_coverage=1 00:16:00.320 --rc genhtml_legend=1 00:16:00.320 --rc geninfo_all_blocks=1 00:16:00.320 --rc geninfo_unexecuted_blocks=1 00:16:00.320 00:16:00.320 ' 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:16:00.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.320 --rc genhtml_branch_coverage=1 00:16:00.320 --rc genhtml_function_coverage=1 00:16:00.320 --rc genhtml_legend=1 00:16:00.320 --rc geninfo_all_blocks=1 00:16:00.320 --rc geninfo_unexecuted_blocks=1 00:16:00.320 00:16:00.320 ' 00:16:00.320 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:16:00.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.321 --rc genhtml_branch_coverage=1 00:16:00.321 --rc genhtml_function_coverage=1 00:16:00.321 --rc genhtml_legend=1 00:16:00.321 --rc geninfo_all_blocks=1 00:16:00.321 --rc geninfo_unexecuted_blocks=1 00:16:00.321 00:16:00.321 ' 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:16:00.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.321 --rc genhtml_branch_coverage=1 00:16:00.321 --rc genhtml_function_coverage=1 00:16:00.321 --rc genhtml_legend=1 00:16:00.321 --rc geninfo_all_blocks=1 00:16:00.321 --rc geninfo_unexecuted_blocks=1 00:16:00.321 00:16:00.321 ' 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:00.321 04:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:00.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:00.321 04:51:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:02.222 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:02.222 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:02.222 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:02.222 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:02.222 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:02.482 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:02.482 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:02.482 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:02.483 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:02.483 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:02.483 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:02.483 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:02.483 04:51:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:02.483 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:02.483 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:02.483 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:02.483 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:02.483 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:02.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:16:02.743 00:16:02.743 --- 10.0.0.2 ping statistics --- 00:16:02.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.743 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:02.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:02.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:16:02.743 00:16:02.743 --- 10.0.0.1 ping statistics --- 00:16:02.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.743 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=2291458 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 2291458 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2291458 ']' 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:02.743 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:02.743 [2024-10-28 04:51:53.181685] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
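Condensed, the target bring-up that the nvmf_tcp_init trace above walks through looks roughly like the sketch below. The cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are the ones from this particular run; everything else is a paraphrase of the traced commands, not the exact common.sh implementation.

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                                    # target-side port gets its own namespace
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.2                                    # reachability check in both directions
  ip netns exec "$NS" ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # then waitforlisten on /var/tmp/spdk.sock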
00:16:02.743 [2024-10-28 04:51:53.181786] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.743 [2024-10-28 04:51:53.328596] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:03.001 [2024-10-28 04:51:53.370364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.001 [2024-10-28 04:51:53.424190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.001 [2024-10-28 04:51:53.424262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.001 [2024-10-28 04:51:53.424278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.002 [2024-10-28 04:51:53.424292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.002 [2024-10-28 04:51:53.424304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.002 [2024-10-28 04:51:53.426057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.002 [2024-10-28 04:51:53.426113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.002 [2024-10-28 04:51:53.426168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.002 [2024-10-28 04:51:53.426171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.002 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:03.002 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:16:03.002 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:03.002 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:03.002 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:03.002 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.002 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:03.002 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17629 00:16:03.260 [2024-10-28 04:51:53.850646] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:03.518 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:03.518 { 00:16:03.518 "nqn": "nqn.2016-06.io.spdk:cnode17629", 00:16:03.518 "tgt_name": "foobar", 00:16:03.518 "method": "nvmf_create_subsystem", 00:16:03.518 "req_id": 1 00:16:03.518 } 00:16:03.518 Got JSON-RPC error response 00:16:03.518 response: 00:16:03.518 { 00:16:03.518 "code": -32603, 00:16:03.518 "message": "Unable to find target foobar" 00:16:03.518 }' 00:16:03.518 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:03.518 { 00:16:03.518 "nqn": 
"nqn.2016-06.io.spdk:cnode17629", 00:16:03.518 "tgt_name": "foobar", 00:16:03.518 "method": "nvmf_create_subsystem", 00:16:03.518 "req_id": 1 00:16:03.518 } 00:16:03.518 Got JSON-RPC error response 00:16:03.518 response: 00:16:03.518 { 00:16:03.518 "code": -32603, 00:16:03.518 "message": "Unable to find target foobar" 00:16:03.518 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:03.518 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:03.518 04:51:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32612 00:16:03.775 [2024-10-28 04:51:54.130925] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32612: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:03.775 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:03.775 { 00:16:03.775 "nqn": "nqn.2016-06.io.spdk:cnode32612", 00:16:03.775 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:03.775 "method": "nvmf_create_subsystem", 00:16:03.775 "req_id": 1 00:16:03.775 } 00:16:03.775 Got JSON-RPC error response 00:16:03.775 response: 00:16:03.775 { 00:16:03.775 "code": -32602, 00:16:03.775 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:03.775 }' 00:16:03.775 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:03.775 { 00:16:03.775 "nqn": "nqn.2016-06.io.spdk:cnode32612", 00:16:03.775 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:03.775 "method": "nvmf_create_subsystem", 00:16:03.775 "req_id": 1 00:16:03.775 } 00:16:03.775 Got JSON-RPC error response 00:16:03.775 response: 00:16:03.775 { 00:16:03.775 "code": -32602, 00:16:03.775 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:03.775 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:03.775 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:03.775 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8726 00:16:04.034 [2024-10-28 04:51:54.411170] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8726: invalid model number 'SPDK_Controller' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:04.034 { 00:16:04.034 "nqn": "nqn.2016-06.io.spdk:cnode8726", 00:16:04.034 "model_number": "SPDK_Controller\u001f", 00:16:04.034 "method": "nvmf_create_subsystem", 00:16:04.034 "req_id": 1 00:16:04.034 } 00:16:04.034 Got JSON-RPC error response 00:16:04.034 response: 00:16:04.034 { 00:16:04.034 "code": -32602, 00:16:04.034 "message": "Invalid MN SPDK_Controller\u001f" 00:16:04.034 }' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:04.034 { 00:16:04.034 "nqn": "nqn.2016-06.io.spdk:cnode8726", 00:16:04.034 "model_number": "SPDK_Controller\u001f", 00:16:04.034 "method": "nvmf_create_subsystem", 00:16:04.034 "req_id": 1 00:16:04.034 } 00:16:04.034 Got JSON-RPC error response 00:16:04.034 response: 00:16:04.034 { 00:16:04.034 "code": -32602, 00:16:04.034 "message": "Invalid MN SPDK_Controller\u001f" 00:16:04.034 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:04.034 04:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:04.034 04:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:04.034 04:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:16:04.034 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.035 04:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ I == \- ]] 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'IdPG->+m5RaGiB? :4HJf' 00:16:04.035 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'IdPG->+m5RaGiB? :4HJf' nqn.2016-06.io.spdk:cnode30655 00:16:04.294 [2024-10-28 04:51:54.791470] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30655: invalid serial number 'IdPG->+m5RaGiB? :4HJf' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:04.294 { 00:16:04.294 "nqn": "nqn.2016-06.io.spdk:cnode30655", 00:16:04.294 "serial_number": "IdPG->+m5RaGiB? :4HJf", 00:16:04.294 "method": "nvmf_create_subsystem", 00:16:04.294 "req_id": 1 00:16:04.294 } 00:16:04.294 Got JSON-RPC error response 00:16:04.294 response: 00:16:04.294 { 00:16:04.294 "code": -32602, 00:16:04.294 "message": "Invalid SN IdPG->+m5RaGiB? :4HJf" 00:16:04.294 }' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:04.294 { 00:16:04.294 "nqn": "nqn.2016-06.io.spdk:cnode30655", 00:16:04.294 "serial_number": "IdPG->+m5RaGiB? :4HJf", 00:16:04.294 "method": "nvmf_create_subsystem", 00:16:04.294 "req_id": 1 00:16:04.294 } 00:16:04.294 Got JSON-RPC error response 00:16:04.294 response: 00:16:04.294 { 00:16:04.294 "code": -32602, 00:16:04.294 "message": "Invalid SN IdPG->+m5RaGiB? 
:4HJf" 00:16:04.294 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
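The long run of printf %x / echo -e / string+= records here is gen_random_s building a random string one character at a time. Functionally it amounts to roughly the following; the helper name and the 32-127 code range are taken from the trace, while the body is an approximation rather than the exact target/invalid.sh implementation.

  gen_random_s() {                       # rough equivalent of the helper traced above
      local length=$1 ll string= code
      local chars=($(seq 32 127))        # printable ASCII plus DEL, as in the chars=(...) array
      for ((ll = 0; ll < length; ll++)); do
          code=${chars[RANDOM % ${#chars[@]}]}
          string+=$(echo -e "\\x$(printf %x "$code")")   # append one random character
      done
      echo "$string"                     # e.g. the 21-char 'IdPG->+m5RaGiB? :4HJf' seen earlier
  }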
00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.294 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 
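Every negative test in this block follows the same pattern: feed rpc.py a deliberately bad value, capture the JSON-RPC error reply, and match on the message. A minimal sketch is below; the -s/-d/-i/-I flags, the expected messages, and the 21/41 lengths (one past the NVMe 20-byte serial-number and 40-byte model-number fields) are taken from this log, while the cnode number is an arbitrary placeholder.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode12345     # placeholder subsystem NQN

  out=$($rpc nvmf_create_subsystem -s "$(gen_random_s 21)" "$nqn" 2>&1) || true
  [[ $out == *"Invalid SN"* ]]           # serial number one character too long

  out=$($rpc nvmf_create_subsystem -d "$(gen_random_s 41)" "$nqn" 2>&1) || true
  [[ $out == *"Invalid MN"* ]]           # model number one character too long

  out=$($rpc nvmf_create_subsystem "$nqn" -i 0 2>&1) || true        # min_cntlid below 1
  [[ $out == *"Invalid cntlid range"* ]]

  out=$($rpc nvmf_create_subsystem "$nqn" -I 65520 2>&1) || true    # max_cntlid above 65519 (0xFFEF)
  [[ $out == *"Invalid cntlid range"* ]]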
00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
87 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:04.553 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ']K(jO?*Vi3hy1'\''8E?3zKK:Dh"of/&[WoH_"z74)^U' 00:16:04.554 04:51:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ']K(jO?*Vi3hy1'\''8E?3zKK:Dh"of/&[WoH_"z74)^U' nqn.2016-06.io.spdk:cnode21014 00:16:04.812 [2024-10-28 04:51:55.219884] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21014: invalid model number ']K(jO?*Vi3hy1'8E?3zKK:Dh"of/&[WoH_"z74)^U' 00:16:04.812 04:51:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:04.812 { 00:16:04.812 "nqn": "nqn.2016-06.io.spdk:cnode21014", 00:16:04.812 "model_number": "]K(jO?*Vi3hy1'\''8E?3zKK:Dh\"of/&[WoH_\"z74)^U", 00:16:04.812 "method": "nvmf_create_subsystem", 00:16:04.812 "req_id": 1 00:16:04.812 } 00:16:04.812 Got JSON-RPC error response 00:16:04.812 response: 00:16:04.812 { 00:16:04.812 "code": -32602, 00:16:04.812 "message": "Invalid MN ]K(jO?*Vi3hy1'\''8E?3zKK:Dh\"of/&[WoH_\"z74)^U" 00:16:04.812 }' 00:16:04.812 04:51:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:04.812 { 00:16:04.812 "nqn": "nqn.2016-06.io.spdk:cnode21014", 00:16:04.812 "model_number": 
"]K(jO?*Vi3hy1'8E?3zKK:Dh\"of/&[WoH_\"z74)^U", 00:16:04.812 "method": "nvmf_create_subsystem", 00:16:04.812 "req_id": 1 00:16:04.812 } 00:16:04.812 Got JSON-RPC error response 00:16:04.812 response: 00:16:04.812 { 00:16:04.812 "code": -32602, 00:16:04.812 "message": "Invalid MN ]K(jO?*Vi3hy1'8E?3zKK:Dh\"of/&[WoH_\"z74)^U" 00:16:04.812 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:04.812 04:51:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:05.069 [2024-10-28 04:51:55.496205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.069 04:51:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:05.327 04:51:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:05.327 04:51:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:05.327 04:51:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:05.327 04:51:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:05.327 04:51:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:05.584 [2024-10-28 04:51:56.040686] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:05.584 04:51:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:05.584 { 00:16:05.584 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:05.584 "listen_address": { 00:16:05.584 "trtype": "tcp", 00:16:05.584 "traddr": "", 00:16:05.584 "trsvcid": "4421" 00:16:05.584 }, 00:16:05.584 "method": "nvmf_subsystem_remove_listener", 00:16:05.584 "req_id": 1 00:16:05.584 } 00:16:05.584 Got JSON-RPC error response 00:16:05.584 response: 00:16:05.584 { 00:16:05.584 "code": -32602, 00:16:05.584 "message": "Invalid parameters" 00:16:05.584 }' 00:16:05.584 04:51:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:05.584 { 00:16:05.584 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:05.584 "listen_address": { 00:16:05.584 "trtype": "tcp", 00:16:05.584 "traddr": "", 00:16:05.584 "trsvcid": "4421" 00:16:05.584 }, 00:16:05.584 "method": "nvmf_subsystem_remove_listener", 00:16:05.584 "req_id": 1 00:16:05.584 } 00:16:05.584 Got JSON-RPC error response 00:16:05.584 response: 00:16:05.584 { 00:16:05.584 "code": -32602, 00:16:05.584 "message": "Invalid parameters" 00:16:05.584 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:05.584 04:51:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23256 -i 0 00:16:05.841 [2024-10-28 04:51:56.312908] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23256: invalid cntlid range [0-65519] 00:16:05.841 04:51:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:05.841 { 00:16:05.841 "nqn": "nqn.2016-06.io.spdk:cnode23256", 00:16:05.841 "min_cntlid": 0, 00:16:05.841 "method": "nvmf_create_subsystem", 00:16:05.841 "req_id": 1 00:16:05.841 } 00:16:05.841 Got JSON-RPC error 
response 00:16:05.841 response: 00:16:05.841 { 00:16:05.841 "code": -32602, 00:16:05.841 "message": "Invalid cntlid range [0-65519]" 00:16:05.841 }' 00:16:05.841 04:51:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:05.841 { 00:16:05.841 "nqn": "nqn.2016-06.io.spdk:cnode23256", 00:16:05.841 "min_cntlid": 0, 00:16:05.841 "method": "nvmf_create_subsystem", 00:16:05.841 "req_id": 1 00:16:05.841 } 00:16:05.841 Got JSON-RPC error response 00:16:05.841 response: 00:16:05.841 { 00:16:05.841 "code": -32602, 00:16:05.842 "message": "Invalid cntlid range [0-65519]" 00:16:05.842 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:05.842 04:51:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28678 -i 65520 00:16:06.098 [2024-10-28 04:51:56.589160] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28678: invalid cntlid range [65520-65519] 00:16:06.098 04:51:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:06.098 { 00:16:06.098 "nqn": "nqn.2016-06.io.spdk:cnode28678", 00:16:06.098 "min_cntlid": 65520, 00:16:06.098 "method": "nvmf_create_subsystem", 00:16:06.098 "req_id": 1 00:16:06.098 } 00:16:06.098 Got JSON-RPC error response 00:16:06.098 response: 00:16:06.098 { 00:16:06.098 "code": -32602, 00:16:06.098 "message": "Invalid cntlid range [65520-65519]" 00:16:06.098 }' 00:16:06.098 04:51:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:06.098 { 00:16:06.098 "nqn": "nqn.2016-06.io.spdk:cnode28678", 00:16:06.098 "min_cntlid": 65520, 00:16:06.098 "method": "nvmf_create_subsystem", 00:16:06.098 "req_id": 1 00:16:06.098 } 00:16:06.098 Got JSON-RPC error response 00:16:06.098 response: 00:16:06.098 { 00:16:06.098 "code": -32602, 00:16:06.098 "message": "Invalid cntlid range [65520-65519]" 00:16:06.098 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:06.098 04:51:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22430 -I 0 00:16:06.356 [2024-10-28 04:51:56.869377] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22430: invalid cntlid range [1-0] 00:16:06.356 04:51:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:06.356 { 00:16:06.356 "nqn": "nqn.2016-06.io.spdk:cnode22430", 00:16:06.356 "max_cntlid": 0, 00:16:06.356 "method": "nvmf_create_subsystem", 00:16:06.356 "req_id": 1 00:16:06.356 } 00:16:06.356 Got JSON-RPC error response 00:16:06.356 response: 00:16:06.356 { 00:16:06.356 "code": -32602, 00:16:06.356 "message": "Invalid cntlid range [1-0]" 00:16:06.356 }' 00:16:06.356 04:51:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:06.356 { 00:16:06.356 "nqn": "nqn.2016-06.io.spdk:cnode22430", 00:16:06.356 "max_cntlid": 0, 00:16:06.356 "method": "nvmf_create_subsystem", 00:16:06.356 "req_id": 1 00:16:06.356 } 00:16:06.356 Got JSON-RPC error response 00:16:06.356 response: 00:16:06.356 { 00:16:06.356 "code": -32602, 00:16:06.356 "message": "Invalid cntlid range [1-0]" 00:16:06.356 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:06.356 04:51:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14641 -I 65520 00:16:06.613 [2024-10-28 04:51:57.161651] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14641: invalid cntlid range [1-65520] 00:16:06.613 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:06.613 { 00:16:06.613 "nqn": "nqn.2016-06.io.spdk:cnode14641", 00:16:06.613 "max_cntlid": 65520, 00:16:06.613 "method": "nvmf_create_subsystem", 00:16:06.614 "req_id": 1 00:16:06.614 } 00:16:06.614 Got JSON-RPC error response 00:16:06.614 response: 00:16:06.614 { 00:16:06.614 "code": -32602, 00:16:06.614 "message": "Invalid cntlid range [1-65520]" 00:16:06.614 }' 00:16:06.614 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:06.614 { 00:16:06.614 "nqn": "nqn.2016-06.io.spdk:cnode14641", 00:16:06.614 "max_cntlid": 65520, 00:16:06.614 "method": "nvmf_create_subsystem", 00:16:06.614 "req_id": 1 00:16:06.614 } 00:16:06.614 Got JSON-RPC error response 00:16:06.614 response: 00:16:06.614 { 00:16:06.614 "code": -32602, 00:16:06.614 "message": "Invalid cntlid range [1-65520]" 00:16:06.614 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:06.614 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17560 -i 6 -I 5 00:16:06.871 [2024-10-28 04:51:57.434028] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17560: invalid cntlid range [6-5] 00:16:06.871 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:06.871 { 00:16:06.871 "nqn": "nqn.2016-06.io.spdk:cnode17560", 00:16:06.871 "min_cntlid": 6, 00:16:06.871 "max_cntlid": 5, 00:16:06.871 "method": "nvmf_create_subsystem", 00:16:06.871 "req_id": 1 00:16:06.871 } 00:16:06.871 Got JSON-RPC error response 00:16:06.871 response: 00:16:06.871 { 00:16:06.871 "code": -32602, 00:16:06.871 "message": "Invalid cntlid range [6-5]" 00:16:06.871 }' 00:16:06.871 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:06.871 { 00:16:06.871 "nqn": "nqn.2016-06.io.spdk:cnode17560", 00:16:06.871 "min_cntlid": 6, 00:16:06.871 "max_cntlid": 5, 00:16:06.871 "method": "nvmf_create_subsystem", 00:16:06.871 "req_id": 1 00:16:06.871 } 00:16:06.871 Got JSON-RPC error response 00:16:06.871 response: 00:16:06.871 { 00:16:06.871 "code": -32602, 00:16:06.871 "message": "Invalid cntlid range [6-5]" 00:16:06.871 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:06.871 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:07.128 { 00:16:07.128 "name": "foobar", 00:16:07.128 "method": "nvmf_delete_target", 00:16:07.128 "req_id": 1 00:16:07.128 } 00:16:07.128 Got JSON-RPC error response 00:16:07.128 response: 00:16:07.128 { 00:16:07.128 "code": -32602, 00:16:07.128 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:16:07.128 }' 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:07.128 { 00:16:07.128 "name": "foobar", 00:16:07.128 "method": "nvmf_delete_target", 00:16:07.128 "req_id": 1 00:16:07.128 } 00:16:07.128 Got JSON-RPC error response 00:16:07.128 response: 00:16:07.128 { 00:16:07.128 "code": -32602, 00:16:07.128 "message": "The specified target doesn't exist, cannot delete it." 00:16:07.128 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:07.128 rmmod nvme_tcp 00:16:07.128 rmmod nvme_fabrics 00:16:07.128 rmmod nvme_keyring 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 2291458 ']' 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 2291458 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2291458 ']' 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2291458 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2291458 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2291458' 00:16:07.128 killing process with pid 2291458 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2291458 00:16:07.128 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2291458 00:16:07.387 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:07.387 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:07.387 04:51:57 
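Taken together, the negative checks above boil down to a short list of rpc.py calls that the target must reject with a -32602 JSON-RPC error. A minimal sketch of that pattern, assuming a running nvmf_tgt and the rpc.py path used in this run (the cnode numbers are just the ones generated here and are otherwise arbitrary):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Each call is expected to fail, so '!' flips the expected non-zero exit into success.
    ! $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23256 -i 0       # min_cntlid below 1
    ! $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28678 -i 65520   # min_cntlid above 65519
    ! $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22430 -I 0       # max_cntlid below 1
    ! $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14641 -I 65520   # max_cntlid above 65519
    ! $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17560 -i 6 -I 5  # min_cntlid greater than max_cntlid
    ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py \
        nvmf_delete_target --name foobar                                   # target does not exist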
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:07.387 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:07.387 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:16:07.387 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:07.387 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:16:07.387 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:07.387 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:07.387 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.387 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.387 04:51:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.918 04:51:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:09.918 00:16:09.918 real 0m9.302s 00:16:09.918 user 0m21.917s 00:16:09.918 sys 0m2.587s 00:16:09.918 04:51:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:09.918 04:51:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:09.918 ************************************ 00:16:09.918 END TEST nvmf_invalid 00:16:09.918 ************************************ 00:16:09.918 04:51:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:09.918 04:51:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:09.918 04:51:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:09.918 04:51:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:09.918 ************************************ 00:16:09.918 START TEST nvmf_connect_stress 00:16:09.918 ************************************ 00:16:09.918 04:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:09.918 * Looking for test storage... 
00:16:09.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1689 -- # lcov --version 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.918 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:16:09.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.919 --rc genhtml_branch_coverage=1 00:16:09.919 --rc genhtml_function_coverage=1 00:16:09.919 --rc genhtml_legend=1 00:16:09.919 --rc geninfo_all_blocks=1 00:16:09.919 --rc geninfo_unexecuted_blocks=1 00:16:09.919 00:16:09.919 ' 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:16:09.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.919 --rc genhtml_branch_coverage=1 00:16:09.919 --rc genhtml_function_coverage=1 00:16:09.919 --rc genhtml_legend=1 00:16:09.919 --rc geninfo_all_blocks=1 00:16:09.919 --rc geninfo_unexecuted_blocks=1 00:16:09.919 00:16:09.919 ' 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:16:09.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.919 --rc genhtml_branch_coverage=1 00:16:09.919 --rc genhtml_function_coverage=1 00:16:09.919 --rc genhtml_legend=1 00:16:09.919 --rc geninfo_all_blocks=1 00:16:09.919 --rc geninfo_unexecuted_blocks=1 00:16:09.919 00:16:09.919 ' 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:16:09.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.919 --rc genhtml_branch_coverage=1 00:16:09.919 --rc genhtml_function_coverage=1 00:16:09.919 --rc genhtml_legend=1 00:16:09.919 --rc geninfo_all_blocks=1 00:16:09.919 --rc geninfo_unexecuted_blocks=1 00:16:09.919 00:16:09.919 ' 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:09.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:09.919 04:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:11.820 04:52:02 
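For orientation, the interface discovery and IP plumbing that the trace below walks through (PCI scan, network-namespace creation, address assignment, firewall rule, and ping verification) condenses to roughly the following, using the device names found on this machine; the real common.sh helpers cover more NIC types and error handling:

    # The target-side NIC moves into its own namespace; the initiator NIC stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic in on the initiator side (the real rule carries an SPDK_NVMF comment tag).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator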
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:11.820 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:11.820 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:11.820 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:11.820 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:11.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:16:11.820 00:16:11.820 --- 10.0.0.2 ping statistics --- 00:16:11.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.820 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:11.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:11.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:16:11.820 00:16:11.820 --- 10.0.0.1 ping statistics --- 00:16:11.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.820 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:11.820 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=2294072 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 2294072 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2294072 ']' 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:11.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:11.821 04:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.821 [2024-10-28 04:52:02.273602] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:16:11.821 [2024-10-28 04:52:02.273709] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.821 [2024-10-28 04:52:02.413680] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:12.079 [2024-10-28 04:52:02.455873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:12.079 [2024-10-28 04:52:02.511006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.079 [2024-10-28 04:52:02.511072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.079 [2024-10-28 04:52:02.511089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.079 [2024-10-28 04:52:02.511103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.079 [2024-10-28 04:52:02.511115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:12.079 [2024-10-28 04:52:02.512773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.079 [2024-10-28 04:52:02.512831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:12.079 [2024-10-28 04:52:02.512836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.014 [2024-10-28 04:52:03.285273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.014 [2024-10-28 04:52:03.302483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.014 NULL1 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2294224 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
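The target-side setup for this test, spelled out as plain rpc.py calls rather than the rpc_cmd wrapper the script uses; a sketch assuming the default /var/tmp/spdk.sock RPC socket that waitforlisten reported above, with SPDK_ROOT standing in for the repo checkout path:

    rpc="$SPDK_ROOT/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    # Initiator side: churn connect/disconnect cycles against the subsystem for 10 seconds.
    "$SPDK_ROOT/test/nvme/connect_stress/connect_stress" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!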
-- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.014 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.273 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.273 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:13.273 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.273 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.273 04:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.531 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.531 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:13.531 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.531 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.531 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.789 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.789 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:13.789 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.789 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.789 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.356 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.356 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:14.356 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.356 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.356 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.614 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.614 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:14.614 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.614 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.614 04:52:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.872 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.872 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:14.872 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.872 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.872 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.130 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.130 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:15.130 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.130 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.130 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.388 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.388 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:15.388 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.388 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.388 04:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.954 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.954 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:15.954 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.954 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.954 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.212 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.212 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:16.212 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.212 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.212 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.469 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.469 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:16.470 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.470 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.470 04:52:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.728 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.728 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:16.728 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.728 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.728 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.985 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.985 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:16.985 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.985 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.985 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.551 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.551 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:17.551 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.551 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.551 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.809 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.809 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:17.809 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.809 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.809 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.067 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.067 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:18.067 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.067 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.067 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.325 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.325 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:18.325 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.325 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.325 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.583 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.583 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:18.583 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.583 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.583 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.149 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.149 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:19.149 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.149 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.149 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.406 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.406 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:19.406 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.406 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.406 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.663 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.663 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:19.663 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.663 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.663 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.920 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.921 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:19.921 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.921 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.921 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.178 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.178 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:20.178 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.178 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.178 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.744 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.744 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:20.744 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.744 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.744 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.003 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.003 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:21.003 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.003 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.003 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.261 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.261 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:21.261 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.261 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.261 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.519 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.519 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:21.519 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.519 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.519 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.777 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.777 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:21.777 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.777 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.777 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.343 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.343 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:22.343 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.343 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.343 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.600 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.600 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:22.600 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.600 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.600 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.893 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.893 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:22.893 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.893 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.893 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.151 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2294224 00:16:23.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2294224) - No such process 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2294224 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:23.151 rmmod nvme_tcp 00:16:23.151 rmmod nvme_fabrics 00:16:23.151 rmmod nvme_keyring 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 2294072 ']' 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # 
killprocess 2294072 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2294072 ']' 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2294072 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2294072 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2294072' 00:16:23.151 killing process with pid 2294072 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2294072 00:16:23.151 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2294072 00:16:23.409 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:23.409 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:23.409 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:23.409 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:23.409 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:16:23.409 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:23.409 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:16:23.409 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:23.409 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:23.409 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.409 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.410 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.374 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:25.374 00:16:25.374 real 0m15.976s 00:16:25.374 user 0m40.482s 00:16:25.374 sys 0m5.777s 00:16:25.374 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.374 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.374 ************************************ 00:16:25.374 END TEST nvmf_connect_stress 00:16:25.374 ************************************ 00:16:25.633 04:52:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:25.633 04:52:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:25.633 04:52:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.633 04:52:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.633 ************************************ 00:16:25.633 START TEST nvmf_fused_ordering 00:16:25.633 ************************************ 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:25.633 * Looking for test storage... 00:16:25.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1689 -- # lcov --version 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:16:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.633 --rc genhtml_branch_coverage=1 00:16:25.633 --rc genhtml_function_coverage=1 00:16:25.633 --rc genhtml_legend=1 00:16:25.633 --rc geninfo_all_blocks=1 00:16:25.633 --rc geninfo_unexecuted_blocks=1 00:16:25.633 00:16:25.633 ' 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:16:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.633 --rc genhtml_branch_coverage=1 00:16:25.633 --rc genhtml_function_coverage=1 00:16:25.633 --rc genhtml_legend=1 00:16:25.633 --rc geninfo_all_blocks=1 00:16:25.633 --rc geninfo_unexecuted_blocks=1 00:16:25.633 00:16:25.633 ' 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:16:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.633 --rc genhtml_branch_coverage=1 00:16:25.633 --rc genhtml_function_coverage=1 00:16:25.633 --rc genhtml_legend=1 00:16:25.633 --rc geninfo_all_blocks=1 00:16:25.633 --rc geninfo_unexecuted_blocks=1 00:16:25.633 00:16:25.633 ' 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:16:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.633 --rc genhtml_branch_coverage=1 00:16:25.633 --rc genhtml_function_coverage=1 00:16:25.633 --rc genhtml_legend=1 00:16:25.633 --rc geninfo_all_blocks=1 00:16:25.633 --rc geninfo_unexecuted_blocks=1 00:16:25.633 00:16:25.633 ' 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.633 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:25.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:25.634 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:16:28.167 04:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:28.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:28.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:28.167 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:28.168 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:28.168 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:28.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:16:28.168 00:16:28.168 --- 10.0.0.2 ping statistics --- 00:16:28.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.168 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:28.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:28.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:16:28.168 00:16:28.168 --- 10.0.0.1 ping statistics --- 00:16:28.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.168 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=2297349 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 2297349 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2297349 ']' 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:28.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:28.168 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:28.168 [2024-10-28 04:52:18.399360] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:16:28.168 [2024-10-28 04:52:18.399445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.168 [2024-10-28 04:52:18.538929] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:28.168 [2024-10-28 04:52:18.575598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.168 [2024-10-28 04:52:18.620043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.168 [2024-10-28 04:52:18.620095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.168 [2024-10-28 04:52:18.620124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.168 [2024-10-28 04:52:18.620135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.168 [2024-10-28 04:52:18.620145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.168 [2024-10-28 04:52:18.620730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:29.102 [2024-10-28 04:52:19.470485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.102 04:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:29.102 [2024-10-28 04:52:19.486632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:29.102 NULL1 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.102 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:29.102 [2024-10-28 04:52:19.532770] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:16:29.102 [2024-10-28 04:52:19.532815] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297499 ] 00:16:29.102 [2024-10-28 04:52:19.665401] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
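The target-side configuration that the rpc_cmd calls above drive can be replayed by hand. The sketch below is an approximation, not the job's script: it assumes a running nvmf_tgt (started earlier inside the cvl_0_0_ns_spdk namespace) and that rpc_cmd resolves to SPDK's scripts/rpc.py over the default /var/tmp/spdk.sock socket; every flag and value is copied from the trace above rather than added.

    #!/usr/bin/env bash
    # Sketch: replay the fused-ordering target setup shown in the trace above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"        # assumed equivalent of rpc_cmd in the trace

    $RPC nvmf_create_transport -t tcp -o -u 8192                      # same transport flags the test passes
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                               # allow any host, serial number, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420                                   # target-side address set up by nvmf_tcp_init
    $RPC bdev_null_create NULL1 1000 512                              # 1000 MB null bdev, 512-byte blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1       # reported below as "Namespace ID: 1 size: 1GB"

    # Initiator side: the exerciser whose fused_ordering(N) output follows.
    $SPDK/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Only the NVMe/TCP data path (10.0.0.2:4420) lives inside the cvl_0_0_ns_spdk namespace; the RPC socket is a UNIX-domain socket on the shared filesystem, which is why the rpc_cmd calls in the trace run without an ip netns exec wrapper.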
00:16:29.667 Attached to nqn.2016-06.io.spdk:cnode1 00:16:29.667 Namespace ID: 1 size: 1GB 00:16:29.667 fused_ordering(0) 00:16:29.667 fused_ordering(1) 00:16:29.667 fused_ordering(2) 00:16:29.667 fused_ordering(3) 00:16:29.667 fused_ordering(4) 00:16:29.667 fused_ordering(5) 00:16:29.667 fused_ordering(6) 00:16:29.667 fused_ordering(7) 00:16:29.667 fused_ordering(8) 00:16:29.667 fused_ordering(9) 00:16:29.667 fused_ordering(10) 00:16:29.667 fused_ordering(11) 00:16:29.667 fused_ordering(12) 00:16:29.667 fused_ordering(13) 00:16:29.667 fused_ordering(14) 00:16:29.667 fused_ordering(15) 00:16:29.667 fused_ordering(16) 00:16:29.667 fused_ordering(17) 00:16:29.667 fused_ordering(18) 00:16:29.667 fused_ordering(19) 00:16:29.667 fused_ordering(20) 00:16:29.667 fused_ordering(21) 00:16:29.667 fused_ordering(22) 00:16:29.667 fused_ordering(23) 00:16:29.667 fused_ordering(24) 00:16:29.667 fused_ordering(25) 00:16:29.667 fused_ordering(26) 00:16:29.667 fused_ordering(27) 00:16:29.667 fused_ordering(28) 00:16:29.667 fused_ordering(29) 00:16:29.667 fused_ordering(30) 00:16:29.667 fused_ordering(31) 00:16:29.667 fused_ordering(32) 00:16:29.667 fused_ordering(33) 00:16:29.667 fused_ordering(34) 00:16:29.667 fused_ordering(35) 00:16:29.667 fused_ordering(36) 00:16:29.667 fused_ordering(37) 00:16:29.667 fused_ordering(38) 00:16:29.667 fused_ordering(39) 00:16:29.667 fused_ordering(40) 00:16:29.667 fused_ordering(41) 00:16:29.667 fused_ordering(42) 00:16:29.667 fused_ordering(43) 00:16:29.667 fused_ordering(44) 00:16:29.667 fused_ordering(45) 00:16:29.667 fused_ordering(46) 00:16:29.667 fused_ordering(47) 00:16:29.667 fused_ordering(48) 00:16:29.667 fused_ordering(49) 00:16:29.667 fused_ordering(50) 00:16:29.667 fused_ordering(51) 00:16:29.667 fused_ordering(52) 00:16:29.667 fused_ordering(53) 00:16:29.667 fused_ordering(54) 00:16:29.667 fused_ordering(55) 00:16:29.667 fused_ordering(56) 00:16:29.667 fused_ordering(57) 00:16:29.667 fused_ordering(58) 00:16:29.667 fused_ordering(59) 00:16:29.667 fused_ordering(60) 00:16:29.667 fused_ordering(61) 00:16:29.667 fused_ordering(62) 00:16:29.667 fused_ordering(63) 00:16:29.667 fused_ordering(64) 00:16:29.667 fused_ordering(65) 00:16:29.667 fused_ordering(66) 00:16:29.667 fused_ordering(67) 00:16:29.667 fused_ordering(68) 00:16:29.667 fused_ordering(69) 00:16:29.667 fused_ordering(70) 00:16:29.667 fused_ordering(71) 00:16:29.667 fused_ordering(72) 00:16:29.667 fused_ordering(73) 00:16:29.667 fused_ordering(74) 00:16:29.667 fused_ordering(75) 00:16:29.667 fused_ordering(76) 00:16:29.667 fused_ordering(77) 00:16:29.667 fused_ordering(78) 00:16:29.667 fused_ordering(79) 00:16:29.667 fused_ordering(80) 00:16:29.667 fused_ordering(81) 00:16:29.667 fused_ordering(82) 00:16:29.667 fused_ordering(83) 00:16:29.667 fused_ordering(84) 00:16:29.667 fused_ordering(85) 00:16:29.667 fused_ordering(86) 00:16:29.667 fused_ordering(87) 00:16:29.667 fused_ordering(88) 00:16:29.667 fused_ordering(89) 00:16:29.667 fused_ordering(90) 00:16:29.667 fused_ordering(91) 00:16:29.667 fused_ordering(92) 00:16:29.667 fused_ordering(93) 00:16:29.667 fused_ordering(94) 00:16:29.667 fused_ordering(95) 00:16:29.667 fused_ordering(96) 00:16:29.667 fused_ordering(97) 00:16:29.667 fused_ordering(98) 00:16:29.667 fused_ordering(99) 00:16:29.668 fused_ordering(100) 00:16:29.668 fused_ordering(101) 00:16:29.668 fused_ordering(102) 00:16:29.668 fused_ordering(103) 00:16:29.668 fused_ordering(104) 00:16:29.668 fused_ordering(105) 00:16:29.668 fused_ordering(106) 00:16:29.668 fused_ordering(107) 
00:16:29.668 fused_ordering(108) 00:16:29.668 fused_ordering(109) 00:16:29.668 fused_ordering(110) 00:16:29.668 fused_ordering(111) 00:16:29.668 fused_ordering(112) 00:16:29.668 fused_ordering(113) 00:16:29.668 fused_ordering(114) 00:16:29.668 fused_ordering(115) 00:16:29.668 fused_ordering(116) 00:16:29.668 fused_ordering(117) 00:16:29.668 fused_ordering(118) 00:16:29.668 fused_ordering(119) 00:16:29.668 fused_ordering(120) 00:16:29.668 fused_ordering(121) 00:16:29.668 fused_ordering(122) 00:16:29.668 fused_ordering(123) 00:16:29.668 fused_ordering(124) 00:16:29.668 fused_ordering(125) 00:16:29.668 fused_ordering(126) 00:16:29.668 fused_ordering(127) 00:16:29.668 fused_ordering(128) 00:16:29.668 fused_ordering(129) 00:16:29.668 fused_ordering(130) 00:16:29.668 fused_ordering(131) 00:16:29.668 fused_ordering(132) 00:16:29.668 fused_ordering(133) 00:16:29.668 fused_ordering(134) 00:16:29.668 fused_ordering(135) 00:16:29.668 fused_ordering(136) 00:16:29.668 fused_ordering(137) 00:16:29.668 fused_ordering(138) 00:16:29.668 fused_ordering(139) 00:16:29.668 fused_ordering(140) 00:16:29.668 fused_ordering(141) 00:16:29.668 fused_ordering(142) 00:16:29.668 fused_ordering(143) 00:16:29.668 fused_ordering(144) 00:16:29.668 fused_ordering(145) 00:16:29.668 fused_ordering(146) 00:16:29.668 fused_ordering(147) 00:16:29.668 fused_ordering(148) 00:16:29.668 fused_ordering(149) 00:16:29.668 fused_ordering(150) 00:16:29.668 fused_ordering(151) 00:16:29.668 fused_ordering(152) 00:16:29.668 fused_ordering(153) 00:16:29.668 fused_ordering(154) 00:16:29.668 fused_ordering(155) 00:16:29.668 fused_ordering(156) 00:16:29.668 fused_ordering(157) 00:16:29.668 fused_ordering(158) 00:16:29.668 fused_ordering(159) 00:16:29.668 fused_ordering(160) 00:16:29.668 fused_ordering(161) 00:16:29.668 fused_ordering(162) 00:16:29.668 fused_ordering(163) 00:16:29.668 fused_ordering(164) 00:16:29.668 fused_ordering(165) 00:16:29.668 fused_ordering(166) 00:16:29.668 fused_ordering(167) 00:16:29.668 fused_ordering(168) 00:16:29.668 fused_ordering(169) 00:16:29.668 fused_ordering(170) 00:16:29.668 fused_ordering(171) 00:16:29.668 fused_ordering(172) 00:16:29.668 fused_ordering(173) 00:16:29.668 fused_ordering(174) 00:16:29.668 fused_ordering(175) 00:16:29.668 fused_ordering(176) 00:16:29.668 fused_ordering(177) 00:16:29.668 fused_ordering(178) 00:16:29.668 fused_ordering(179) 00:16:29.668 fused_ordering(180) 00:16:29.668 fused_ordering(181) 00:16:29.668 fused_ordering(182) 00:16:29.668 fused_ordering(183) 00:16:29.668 fused_ordering(184) 00:16:29.668 fused_ordering(185) 00:16:29.668 fused_ordering(186) 00:16:29.668 fused_ordering(187) 00:16:29.668 fused_ordering(188) 00:16:29.668 fused_ordering(189) 00:16:29.668 fused_ordering(190) 00:16:29.668 fused_ordering(191) 00:16:29.668 fused_ordering(192) 00:16:29.668 fused_ordering(193) 00:16:29.668 fused_ordering(194) 00:16:29.668 fused_ordering(195) 00:16:29.668 fused_ordering(196) 00:16:29.668 fused_ordering(197) 00:16:29.668 fused_ordering(198) 00:16:29.668 fused_ordering(199) 00:16:29.668 fused_ordering(200) 00:16:29.668 fused_ordering(201) 00:16:29.668 fused_ordering(202) 00:16:29.668 fused_ordering(203) 00:16:29.668 fused_ordering(204) 00:16:29.668 fused_ordering(205) 00:16:29.926 fused_ordering(206) 00:16:29.926 fused_ordering(207) 00:16:29.926 fused_ordering(208) 00:16:29.926 fused_ordering(209) 00:16:29.926 fused_ordering(210) 00:16:29.926 fused_ordering(211) 00:16:29.926 fused_ordering(212) 00:16:29.926 fused_ordering(213) 00:16:29.926 fused_ordering(214) 00:16:29.926 
fused_ordering(215) 00:16:29.926 fused_ordering(216) 00:16:29.926 fused_ordering(217) 00:16:29.926 fused_ordering(218) 00:16:29.926 fused_ordering(219) 00:16:29.926 fused_ordering(220) 00:16:29.926 fused_ordering(221) 00:16:29.926 fused_ordering(222) 00:16:29.926 fused_ordering(223) 00:16:29.926 fused_ordering(224) 00:16:29.926 fused_ordering(225) 00:16:29.926 fused_ordering(226) 00:16:29.926 fused_ordering(227) 00:16:29.926 fused_ordering(228) 00:16:29.926 fused_ordering(229) 00:16:29.926 fused_ordering(230) 00:16:29.926 fused_ordering(231) 00:16:29.926 fused_ordering(232) 00:16:29.926 fused_ordering(233) 00:16:29.926 fused_ordering(234) 00:16:29.926 fused_ordering(235) 00:16:29.926 fused_ordering(236) 00:16:29.926 fused_ordering(237) 00:16:29.926 fused_ordering(238) 00:16:29.926 fused_ordering(239) 00:16:29.926 fused_ordering(240) 00:16:29.926 fused_ordering(241) 00:16:29.926 fused_ordering(242) 00:16:29.926 fused_ordering(243) 00:16:29.926 fused_ordering(244) 00:16:29.926 fused_ordering(245) 00:16:29.926 fused_ordering(246) 00:16:29.926 fused_ordering(247) 00:16:29.926 fused_ordering(248) 00:16:29.926 fused_ordering(249) 00:16:29.926 fused_ordering(250) 00:16:29.926 fused_ordering(251) 00:16:29.926 fused_ordering(252) 00:16:29.926 fused_ordering(253) 00:16:29.926 fused_ordering(254) 00:16:29.926 fused_ordering(255) 00:16:29.926 fused_ordering(256) 00:16:29.926 fused_ordering(257) 00:16:29.926 fused_ordering(258) 00:16:29.926 fused_ordering(259) 00:16:29.926 fused_ordering(260) 00:16:29.926 fused_ordering(261) 00:16:29.926 fused_ordering(262) 00:16:29.926 fused_ordering(263) 00:16:29.926 fused_ordering(264) 00:16:29.926 fused_ordering(265) 00:16:29.926 fused_ordering(266) 00:16:29.926 fused_ordering(267) 00:16:29.926 fused_ordering(268) 00:16:29.926 fused_ordering(269) 00:16:29.926 fused_ordering(270) 00:16:29.926 fused_ordering(271) 00:16:29.926 fused_ordering(272) 00:16:29.926 fused_ordering(273) 00:16:29.926 fused_ordering(274) 00:16:29.926 fused_ordering(275) 00:16:29.926 fused_ordering(276) 00:16:29.926 fused_ordering(277) 00:16:29.926 fused_ordering(278) 00:16:29.926 fused_ordering(279) 00:16:29.926 fused_ordering(280) 00:16:29.926 fused_ordering(281) 00:16:29.926 fused_ordering(282) 00:16:29.926 fused_ordering(283) 00:16:29.926 fused_ordering(284) 00:16:29.926 fused_ordering(285) 00:16:29.926 fused_ordering(286) 00:16:29.926 fused_ordering(287) 00:16:29.926 fused_ordering(288) 00:16:29.926 fused_ordering(289) 00:16:29.926 fused_ordering(290) 00:16:29.926 fused_ordering(291) 00:16:29.926 fused_ordering(292) 00:16:29.926 fused_ordering(293) 00:16:29.926 fused_ordering(294) 00:16:29.926 fused_ordering(295) 00:16:29.926 fused_ordering(296) 00:16:29.926 fused_ordering(297) 00:16:29.926 fused_ordering(298) 00:16:29.926 fused_ordering(299) 00:16:29.926 fused_ordering(300) 00:16:29.926 fused_ordering(301) 00:16:29.926 fused_ordering(302) 00:16:29.926 fused_ordering(303) 00:16:29.926 fused_ordering(304) 00:16:29.926 fused_ordering(305) 00:16:29.926 fused_ordering(306) 00:16:29.926 fused_ordering(307) 00:16:29.926 fused_ordering(308) 00:16:29.926 fused_ordering(309) 00:16:29.926 fused_ordering(310) 00:16:29.926 fused_ordering(311) 00:16:29.926 fused_ordering(312) 00:16:29.926 fused_ordering(313) 00:16:29.926 fused_ordering(314) 00:16:29.926 fused_ordering(315) 00:16:29.926 fused_ordering(316) 00:16:29.926 fused_ordering(317) 00:16:29.926 fused_ordering(318) 00:16:29.926 fused_ordering(319) 00:16:29.926 fused_ordering(320) 00:16:29.926 fused_ordering(321) 00:16:29.926 fused_ordering(322) 
00:16:29.926 fused_ordering(323) 00:16:29.926 fused_ordering(324) 00:16:29.926 fused_ordering(325) 00:16:29.926 fused_ordering(326) 00:16:29.926 fused_ordering(327) 00:16:29.926 fused_ordering(328) 00:16:29.926 fused_ordering(329) 00:16:29.926 fused_ordering(330) 00:16:29.926 fused_ordering(331) 00:16:29.926 fused_ordering(332) 00:16:29.926 fused_ordering(333) 00:16:29.926 fused_ordering(334) 00:16:29.926 fused_ordering(335) 00:16:29.926 fused_ordering(336) 00:16:29.926 fused_ordering(337) 00:16:29.926 fused_ordering(338) 00:16:29.926 fused_ordering(339) 00:16:29.926 fused_ordering(340) 00:16:29.926 fused_ordering(341) 00:16:29.926 fused_ordering(342) 00:16:29.926 fused_ordering(343) 00:16:29.926 fused_ordering(344) 00:16:29.926 fused_ordering(345) 00:16:29.926 fused_ordering(346) 00:16:29.926 fused_ordering(347) 00:16:29.926 fused_ordering(348) 00:16:29.926 fused_ordering(349) 00:16:29.926 fused_ordering(350) 00:16:29.926 fused_ordering(351) 00:16:29.926 fused_ordering(352) 00:16:29.926 fused_ordering(353) 00:16:29.926 fused_ordering(354) 00:16:29.926 fused_ordering(355) 00:16:29.926 fused_ordering(356) 00:16:29.926 fused_ordering(357) 00:16:29.926 fused_ordering(358) 00:16:29.926 fused_ordering(359) 00:16:29.926 fused_ordering(360) 00:16:29.926 fused_ordering(361) 00:16:29.926 fused_ordering(362) 00:16:29.926 fused_ordering(363) 00:16:29.926 fused_ordering(364) 00:16:29.926 fused_ordering(365) 00:16:29.926 fused_ordering(366) 00:16:29.926 fused_ordering(367) 00:16:29.926 fused_ordering(368) 00:16:29.926 fused_ordering(369) 00:16:29.926 fused_ordering(370) 00:16:29.926 fused_ordering(371) 00:16:29.926 fused_ordering(372) 00:16:29.926 fused_ordering(373) 00:16:29.926 fused_ordering(374) 00:16:29.926 fused_ordering(375) 00:16:29.926 fused_ordering(376) 00:16:29.926 fused_ordering(377) 00:16:29.926 fused_ordering(378) 00:16:29.926 fused_ordering(379) 00:16:29.926 fused_ordering(380) 00:16:29.926 fused_ordering(381) 00:16:29.926 fused_ordering(382) 00:16:29.926 fused_ordering(383) 00:16:29.926 fused_ordering(384) 00:16:29.926 fused_ordering(385) 00:16:29.926 fused_ordering(386) 00:16:29.926 fused_ordering(387) 00:16:29.926 fused_ordering(388) 00:16:29.926 fused_ordering(389) 00:16:29.926 fused_ordering(390) 00:16:29.926 fused_ordering(391) 00:16:29.926 fused_ordering(392) 00:16:29.926 fused_ordering(393) 00:16:29.926 fused_ordering(394) 00:16:29.926 fused_ordering(395) 00:16:29.926 fused_ordering(396) 00:16:29.926 fused_ordering(397) 00:16:29.926 fused_ordering(398) 00:16:29.926 fused_ordering(399) 00:16:29.926 fused_ordering(400) 00:16:29.926 fused_ordering(401) 00:16:29.926 fused_ordering(402) 00:16:29.926 fused_ordering(403) 00:16:29.926 fused_ordering(404) 00:16:29.926 fused_ordering(405) 00:16:29.926 fused_ordering(406) 00:16:29.926 fused_ordering(407) 00:16:29.926 fused_ordering(408) 00:16:29.926 fused_ordering(409) 00:16:29.926 fused_ordering(410) 00:16:30.492 fused_ordering(411) 00:16:30.492 fused_ordering(412) 00:16:30.492 fused_ordering(413) 00:16:30.492 fused_ordering(414) 00:16:30.492 fused_ordering(415) 00:16:30.492 fused_ordering(416) 00:16:30.492 fused_ordering(417) 00:16:30.492 fused_ordering(418) 00:16:30.492 fused_ordering(419) 00:16:30.492 fused_ordering(420) 00:16:30.492 fused_ordering(421) 00:16:30.492 fused_ordering(422) 00:16:30.492 fused_ordering(423) 00:16:30.492 fused_ordering(424) 00:16:30.492 fused_ordering(425) 00:16:30.492 fused_ordering(426) 00:16:30.492 fused_ordering(427) 00:16:30.492 fused_ordering(428) 00:16:30.492 fused_ordering(429) 00:16:30.492 
fused_ordering(430) 00:16:30.492 fused_ordering(431) 00:16:30.492 fused_ordering(432) 00:16:30.492 fused_ordering(433) 00:16:30.492 fused_ordering(434) 00:16:30.492 fused_ordering(435) 00:16:30.492 fused_ordering(436) 00:16:30.492 fused_ordering(437) 00:16:30.492 fused_ordering(438) 00:16:30.492 fused_ordering(439) 00:16:30.492 fused_ordering(440) 00:16:30.492 fused_ordering(441) 00:16:30.492 fused_ordering(442) 00:16:30.492 fused_ordering(443) 00:16:30.492 fused_ordering(444) 00:16:30.492 fused_ordering(445) 00:16:30.492 fused_ordering(446) 00:16:30.492 fused_ordering(447) 00:16:30.492 fused_ordering(448) 00:16:30.492 fused_ordering(449) 00:16:30.492 fused_ordering(450) 00:16:30.492 fused_ordering(451) 00:16:30.492 fused_ordering(452) 00:16:30.492 fused_ordering(453) 00:16:30.492 fused_ordering(454) 00:16:30.492 fused_ordering(455) 00:16:30.492 fused_ordering(456) 00:16:30.492 fused_ordering(457) 00:16:30.492 fused_ordering(458) 00:16:30.492 fused_ordering(459) 00:16:30.492 fused_ordering(460) 00:16:30.492 fused_ordering(461) 00:16:30.492 fused_ordering(462) 00:16:30.492 fused_ordering(463) 00:16:30.492 fused_ordering(464) 00:16:30.492 fused_ordering(465) 00:16:30.492 fused_ordering(466) 00:16:30.492 fused_ordering(467) 00:16:30.492 fused_ordering(468) 00:16:30.492 fused_ordering(469) 00:16:30.492 fused_ordering(470) 00:16:30.492 fused_ordering(471) 00:16:30.492 fused_ordering(472) 00:16:30.492 fused_ordering(473) 00:16:30.492 fused_ordering(474) 00:16:30.492 fused_ordering(475) 00:16:30.492 fused_ordering(476) 00:16:30.492 fused_ordering(477) 00:16:30.492 fused_ordering(478) 00:16:30.492 fused_ordering(479) 00:16:30.492 fused_ordering(480) 00:16:30.492 fused_ordering(481) 00:16:30.492 fused_ordering(482) 00:16:30.492 fused_ordering(483) 00:16:30.492 fused_ordering(484) 00:16:30.492 fused_ordering(485) 00:16:30.492 fused_ordering(486) 00:16:30.492 fused_ordering(487) 00:16:30.492 fused_ordering(488) 00:16:30.492 fused_ordering(489) 00:16:30.492 fused_ordering(490) 00:16:30.492 fused_ordering(491) 00:16:30.492 fused_ordering(492) 00:16:30.492 fused_ordering(493) 00:16:30.492 fused_ordering(494) 00:16:30.492 fused_ordering(495) 00:16:30.492 fused_ordering(496) 00:16:30.492 fused_ordering(497) 00:16:30.492 fused_ordering(498) 00:16:30.492 fused_ordering(499) 00:16:30.492 fused_ordering(500) 00:16:30.492 fused_ordering(501) 00:16:30.492 fused_ordering(502) 00:16:30.492 fused_ordering(503) 00:16:30.492 fused_ordering(504) 00:16:30.492 fused_ordering(505) 00:16:30.492 fused_ordering(506) 00:16:30.492 fused_ordering(507) 00:16:30.492 fused_ordering(508) 00:16:30.492 fused_ordering(509) 00:16:30.492 fused_ordering(510) 00:16:30.492 fused_ordering(511) 00:16:30.492 fused_ordering(512) 00:16:30.492 fused_ordering(513) 00:16:30.492 fused_ordering(514) 00:16:30.492 fused_ordering(515) 00:16:30.492 fused_ordering(516) 00:16:30.492 fused_ordering(517) 00:16:30.492 fused_ordering(518) 00:16:30.492 fused_ordering(519) 00:16:30.492 fused_ordering(520) 00:16:30.492 fused_ordering(521) 00:16:30.492 fused_ordering(522) 00:16:30.492 fused_ordering(523) 00:16:30.492 fused_ordering(524) 00:16:30.492 fused_ordering(525) 00:16:30.492 fused_ordering(526) 00:16:30.492 fused_ordering(527) 00:16:30.492 fused_ordering(528) 00:16:30.492 fused_ordering(529) 00:16:30.492 fused_ordering(530) 00:16:30.492 fused_ordering(531) 00:16:30.492 fused_ordering(532) 00:16:30.492 fused_ordering(533) 00:16:30.492 fused_ordering(534) 00:16:30.492 fused_ordering(535) 00:16:30.492 fused_ordering(536) 00:16:30.492 fused_ordering(537) 
00:16:30.492 fused_ordering(538) 00:16:30.492 fused_ordering(539) 00:16:30.492 fused_ordering(540) 00:16:30.492 fused_ordering(541) 00:16:30.492 fused_ordering(542) 00:16:30.492 fused_ordering(543) 00:16:30.492 fused_ordering(544) 00:16:30.492 fused_ordering(545) 00:16:30.492 fused_ordering(546) 00:16:30.492 fused_ordering(547) 00:16:30.492 fused_ordering(548) 00:16:30.492 fused_ordering(549) 00:16:30.492 fused_ordering(550) 00:16:30.492 fused_ordering(551) 00:16:30.492 fused_ordering(552) 00:16:30.492 fused_ordering(553) 00:16:30.492 fused_ordering(554) 00:16:30.492 fused_ordering(555) 00:16:30.492 fused_ordering(556) 00:16:30.492 fused_ordering(557) 00:16:30.492 fused_ordering(558) 00:16:30.492 fused_ordering(559) 00:16:30.492 fused_ordering(560) 00:16:30.492 fused_ordering(561) 00:16:30.492 fused_ordering(562) 00:16:30.492 fused_ordering(563) 00:16:30.492 fused_ordering(564) 00:16:30.492 fused_ordering(565) 00:16:30.492 fused_ordering(566) 00:16:30.492 fused_ordering(567) 00:16:30.492 fused_ordering(568) 00:16:30.492 fused_ordering(569) 00:16:30.492 fused_ordering(570) 00:16:30.492 fused_ordering(571) 00:16:30.492 fused_ordering(572) 00:16:30.492 fused_ordering(573) 00:16:30.492 fused_ordering(574) 00:16:30.492 fused_ordering(575) 00:16:30.492 fused_ordering(576) 00:16:30.492 fused_ordering(577) 00:16:30.492 fused_ordering(578) 00:16:30.492 fused_ordering(579) 00:16:30.492 fused_ordering(580) 00:16:30.492 fused_ordering(581) 00:16:30.492 fused_ordering(582) 00:16:30.492 fused_ordering(583) 00:16:30.492 fused_ordering(584) 00:16:30.492 fused_ordering(585) 00:16:30.492 fused_ordering(586) 00:16:30.492 fused_ordering(587) 00:16:30.492 fused_ordering(588) 00:16:30.492 fused_ordering(589) 00:16:30.492 fused_ordering(590) 00:16:30.492 fused_ordering(591) 00:16:30.492 fused_ordering(592) 00:16:30.492 fused_ordering(593) 00:16:30.492 fused_ordering(594) 00:16:30.492 fused_ordering(595) 00:16:30.492 fused_ordering(596) 00:16:30.492 fused_ordering(597) 00:16:30.492 fused_ordering(598) 00:16:30.492 fused_ordering(599) 00:16:30.492 fused_ordering(600) 00:16:30.492 fused_ordering(601) 00:16:30.492 fused_ordering(602) 00:16:30.492 fused_ordering(603) 00:16:30.492 fused_ordering(604) 00:16:30.492 fused_ordering(605) 00:16:30.492 fused_ordering(606) 00:16:30.492 fused_ordering(607) 00:16:30.492 fused_ordering(608) 00:16:30.492 fused_ordering(609) 00:16:30.492 fused_ordering(610) 00:16:30.492 fused_ordering(611) 00:16:30.492 fused_ordering(612) 00:16:30.492 fused_ordering(613) 00:16:30.493 fused_ordering(614) 00:16:30.493 fused_ordering(615) 00:16:31.058 fused_ordering(616) 00:16:31.058 fused_ordering(617) 00:16:31.058 fused_ordering(618) 00:16:31.058 fused_ordering(619) 00:16:31.058 fused_ordering(620) 00:16:31.058 fused_ordering(621) 00:16:31.058 fused_ordering(622) 00:16:31.058 fused_ordering(623) 00:16:31.058 fused_ordering(624) 00:16:31.058 fused_ordering(625) 00:16:31.058 fused_ordering(626) 00:16:31.058 fused_ordering(627) 00:16:31.058 fused_ordering(628) 00:16:31.058 fused_ordering(629) 00:16:31.058 fused_ordering(630) 00:16:31.058 fused_ordering(631) 00:16:31.058 fused_ordering(632) 00:16:31.058 fused_ordering(633) 00:16:31.058 fused_ordering(634) 00:16:31.058 fused_ordering(635) 00:16:31.058 fused_ordering(636) 00:16:31.058 fused_ordering(637) 00:16:31.058 fused_ordering(638) 00:16:31.058 fused_ordering(639) 00:16:31.058 fused_ordering(640) 00:16:31.058 fused_ordering(641) 00:16:31.058 fused_ordering(642) 00:16:31.058 fused_ordering(643) 00:16:31.058 fused_ordering(644) 00:16:31.058 
fused_ordering(645) 00:16:31.058 fused_ordering(646) 00:16:31.058 fused_ordering(647) 00:16:31.058 fused_ordering(648) 00:16:31.058 fused_ordering(649) 00:16:31.058 fused_ordering(650) 00:16:31.058 fused_ordering(651) 00:16:31.058 fused_ordering(652) 00:16:31.058 fused_ordering(653) 00:16:31.058 fused_ordering(654) 00:16:31.058 fused_ordering(655) 00:16:31.058 fused_ordering(656) 00:16:31.058 fused_ordering(657) 00:16:31.058 fused_ordering(658) 00:16:31.058 fused_ordering(659) 00:16:31.058 fused_ordering(660) 00:16:31.058 fused_ordering(661) 00:16:31.058 fused_ordering(662) 00:16:31.058 fused_ordering(663) 00:16:31.058 fused_ordering(664) 00:16:31.058 fused_ordering(665) 00:16:31.058 fused_ordering(666) 00:16:31.058 fused_ordering(667) 00:16:31.058 fused_ordering(668) 00:16:31.058 fused_ordering(669) 00:16:31.058 fused_ordering(670) 00:16:31.058 fused_ordering(671) 00:16:31.058 fused_ordering(672) 00:16:31.058 fused_ordering(673) 00:16:31.058 fused_ordering(674) 00:16:31.058 fused_ordering(675) 00:16:31.058 fused_ordering(676) 00:16:31.058 fused_ordering(677) 00:16:31.058 fused_ordering(678) 00:16:31.058 fused_ordering(679) 00:16:31.058 fused_ordering(680) 00:16:31.058 fused_ordering(681) 00:16:31.058 fused_ordering(682) 00:16:31.058 fused_ordering(683) 00:16:31.058 fused_ordering(684) 00:16:31.058 fused_ordering(685) 00:16:31.058 fused_ordering(686) 00:16:31.058 fused_ordering(687) 00:16:31.058 fused_ordering(688) 00:16:31.058 fused_ordering(689) 00:16:31.058 fused_ordering(690) 00:16:31.058 fused_ordering(691) 00:16:31.059 fused_ordering(692) 00:16:31.059 fused_ordering(693) 00:16:31.059 fused_ordering(694) 00:16:31.059 fused_ordering(695) 00:16:31.059 fused_ordering(696) 00:16:31.059 fused_ordering(697) 00:16:31.059 fused_ordering(698) 00:16:31.059 fused_ordering(699) 00:16:31.059 fused_ordering(700) 00:16:31.059 fused_ordering(701) 00:16:31.059 fused_ordering(702) 00:16:31.059 fused_ordering(703) 00:16:31.059 fused_ordering(704) 00:16:31.059 fused_ordering(705) 00:16:31.059 fused_ordering(706) 00:16:31.059 fused_ordering(707) 00:16:31.059 fused_ordering(708) 00:16:31.059 fused_ordering(709) 00:16:31.059 fused_ordering(710) 00:16:31.059 fused_ordering(711) 00:16:31.059 fused_ordering(712) 00:16:31.059 fused_ordering(713) 00:16:31.059 fused_ordering(714) 00:16:31.059 fused_ordering(715) 00:16:31.059 fused_ordering(716) 00:16:31.059 fused_ordering(717) 00:16:31.059 fused_ordering(718) 00:16:31.059 fused_ordering(719) 00:16:31.059 fused_ordering(720) 00:16:31.059 fused_ordering(721) 00:16:31.059 fused_ordering(722) 00:16:31.059 fused_ordering(723) 00:16:31.059 fused_ordering(724) 00:16:31.059 fused_ordering(725) 00:16:31.059 fused_ordering(726) 00:16:31.059 fused_ordering(727) 00:16:31.059 fused_ordering(728) 00:16:31.059 fused_ordering(729) 00:16:31.059 fused_ordering(730) 00:16:31.059 fused_ordering(731) 00:16:31.059 fused_ordering(732) 00:16:31.059 fused_ordering(733) 00:16:31.059 fused_ordering(734) 00:16:31.059 fused_ordering(735) 00:16:31.059 fused_ordering(736) 00:16:31.059 fused_ordering(737) 00:16:31.059 fused_ordering(738) 00:16:31.059 fused_ordering(739) 00:16:31.059 fused_ordering(740) 00:16:31.059 fused_ordering(741) 00:16:31.059 fused_ordering(742) 00:16:31.059 fused_ordering(743) 00:16:31.059 fused_ordering(744) 00:16:31.059 fused_ordering(745) 00:16:31.059 fused_ordering(746) 00:16:31.059 fused_ordering(747) 00:16:31.059 fused_ordering(748) 00:16:31.059 fused_ordering(749) 00:16:31.059 fused_ordering(750) 00:16:31.059 fused_ordering(751) 00:16:31.059 fused_ordering(752) 
00:16:31.059 fused_ordering(753) 00:16:31.059 fused_ordering(754) 00:16:31.059 fused_ordering(755) 00:16:31.059 fused_ordering(756) 00:16:31.059 fused_ordering(757) 00:16:31.059 fused_ordering(758) 00:16:31.059 fused_ordering(759) 00:16:31.059 fused_ordering(760) 00:16:31.059 fused_ordering(761) 00:16:31.059 fused_ordering(762) 00:16:31.059 fused_ordering(763) 00:16:31.059 fused_ordering(764) 00:16:31.059 fused_ordering(765) 00:16:31.059 fused_ordering(766) 00:16:31.059 fused_ordering(767) 00:16:31.059 fused_ordering(768) 00:16:31.059 fused_ordering(769) 00:16:31.059 fused_ordering(770) 00:16:31.059 fused_ordering(771) 00:16:31.059 fused_ordering(772) 00:16:31.059 fused_ordering(773) 00:16:31.059 fused_ordering(774) 00:16:31.059 fused_ordering(775) 00:16:31.059 fused_ordering(776) 00:16:31.059 fused_ordering(777) 00:16:31.059 fused_ordering(778) 00:16:31.059 fused_ordering(779) 00:16:31.059 fused_ordering(780) 00:16:31.059 fused_ordering(781) 00:16:31.059 fused_ordering(782) 00:16:31.059 fused_ordering(783) 00:16:31.059 fused_ordering(784) 00:16:31.059 fused_ordering(785) 00:16:31.059 fused_ordering(786) 00:16:31.059 fused_ordering(787) 00:16:31.059 fused_ordering(788) 00:16:31.059 fused_ordering(789) 00:16:31.059 fused_ordering(790) 00:16:31.059 fused_ordering(791) 00:16:31.059 fused_ordering(792) 00:16:31.059 fused_ordering(793) 00:16:31.059 fused_ordering(794) 00:16:31.059 fused_ordering(795) 00:16:31.059 fused_ordering(796) 00:16:31.059 fused_ordering(797) 00:16:31.059 fused_ordering(798) 00:16:31.059 fused_ordering(799) 00:16:31.059 fused_ordering(800) 00:16:31.059 fused_ordering(801) 00:16:31.059 fused_ordering(802) 00:16:31.059 fused_ordering(803) 00:16:31.059 fused_ordering(804) 00:16:31.059 fused_ordering(805) 00:16:31.059 fused_ordering(806) 00:16:31.059 fused_ordering(807) 00:16:31.059 fused_ordering(808) 00:16:31.059 fused_ordering(809) 00:16:31.059 fused_ordering(810) 00:16:31.059 fused_ordering(811) 00:16:31.059 fused_ordering(812) 00:16:31.059 fused_ordering(813) 00:16:31.059 fused_ordering(814) 00:16:31.059 fused_ordering(815) 00:16:31.059 fused_ordering(816) 00:16:31.059 fused_ordering(817) 00:16:31.059 fused_ordering(818) 00:16:31.059 fused_ordering(819) 00:16:31.059 fused_ordering(820) 00:16:31.993 fused_ordering(821) 00:16:31.993 fused_ordering(822) 00:16:31.993 fused_ordering(823) 00:16:31.993 fused_ordering(824) 00:16:31.993 fused_ordering(825) 00:16:31.993 fused_ordering(826) 00:16:31.993 fused_ordering(827) 00:16:31.993 fused_ordering(828) 00:16:31.993 fused_ordering(829) 00:16:31.993 fused_ordering(830) 00:16:31.993 fused_ordering(831) 00:16:31.993 fused_ordering(832) 00:16:31.993 fused_ordering(833) 00:16:31.993 fused_ordering(834) 00:16:31.993 fused_ordering(835) 00:16:31.993 fused_ordering(836) 00:16:31.993 fused_ordering(837) 00:16:31.993 fused_ordering(838) 00:16:31.993 fused_ordering(839) 00:16:31.993 fused_ordering(840) 00:16:31.993 fused_ordering(841) 00:16:31.993 fused_ordering(842) 00:16:31.993 fused_ordering(843) 00:16:31.993 fused_ordering(844) 00:16:31.993 fused_ordering(845) 00:16:31.993 fused_ordering(846) 00:16:31.993 fused_ordering(847) 00:16:31.993 fused_ordering(848) 00:16:31.993 fused_ordering(849) 00:16:31.993 fused_ordering(850) 00:16:31.993 fused_ordering(851) 00:16:31.993 fused_ordering(852) 00:16:31.993 fused_ordering(853) 00:16:31.993 fused_ordering(854) 00:16:31.993 fused_ordering(855) 00:16:31.993 fused_ordering(856) 00:16:31.993 fused_ordering(857) 00:16:31.993 fused_ordering(858) 00:16:31.993 fused_ordering(859) 00:16:31.993 
fused_ordering(860) 00:16:31.993 fused_ordering(861) 00:16:31.993 fused_ordering(862) 00:16:31.993 fused_ordering(863) 00:16:31.993 fused_ordering(864) 00:16:31.993 fused_ordering(865) 00:16:31.993 fused_ordering(866) 00:16:31.993 fused_ordering(867) 00:16:31.993 fused_ordering(868) 00:16:31.993 fused_ordering(869) 00:16:31.993 fused_ordering(870) 00:16:31.993 fused_ordering(871) 00:16:31.993 fused_ordering(872) 00:16:31.993 fused_ordering(873) 00:16:31.993 fused_ordering(874) 00:16:31.993 fused_ordering(875) 00:16:31.993 fused_ordering(876) 00:16:31.993 fused_ordering(877) 00:16:31.993 fused_ordering(878) 00:16:31.993 fused_ordering(879) 00:16:31.993 fused_ordering(880) 00:16:31.993 fused_ordering(881) 00:16:31.993 fused_ordering(882) 00:16:31.993 fused_ordering(883) 00:16:31.993 fused_ordering(884) 00:16:31.993 fused_ordering(885) 00:16:31.993 fused_ordering(886) 00:16:31.993 fused_ordering(887) 00:16:31.993 fused_ordering(888) 00:16:31.993 fused_ordering(889) 00:16:31.993 fused_ordering(890) 00:16:31.993 fused_ordering(891) 00:16:31.993 fused_ordering(892) 00:16:31.993 fused_ordering(893) 00:16:31.993 fused_ordering(894) 00:16:31.993 fused_ordering(895) 00:16:31.993 fused_ordering(896) 00:16:31.993 fused_ordering(897) 00:16:31.993 fused_ordering(898) 00:16:31.993 fused_ordering(899) 00:16:31.993 fused_ordering(900) 00:16:31.993 fused_ordering(901) 00:16:31.993 fused_ordering(902) 00:16:31.993 fused_ordering(903) 00:16:31.993 fused_ordering(904) 00:16:31.993 fused_ordering(905) 00:16:31.993 fused_ordering(906) 00:16:31.993 fused_ordering(907) 00:16:31.993 fused_ordering(908) 00:16:31.993 fused_ordering(909) 00:16:31.993 fused_ordering(910) 00:16:31.993 fused_ordering(911) 00:16:31.993 fused_ordering(912) 00:16:31.993 fused_ordering(913) 00:16:31.993 fused_ordering(914) 00:16:31.993 fused_ordering(915) 00:16:31.993 fused_ordering(916) 00:16:31.993 fused_ordering(917) 00:16:31.993 fused_ordering(918) 00:16:31.993 fused_ordering(919) 00:16:31.993 fused_ordering(920) 00:16:31.993 fused_ordering(921) 00:16:31.993 fused_ordering(922) 00:16:31.993 fused_ordering(923) 00:16:31.993 fused_ordering(924) 00:16:31.993 fused_ordering(925) 00:16:31.993 fused_ordering(926) 00:16:31.993 fused_ordering(927) 00:16:31.993 fused_ordering(928) 00:16:31.993 fused_ordering(929) 00:16:31.993 fused_ordering(930) 00:16:31.993 fused_ordering(931) 00:16:31.993 fused_ordering(932) 00:16:31.993 fused_ordering(933) 00:16:31.993 fused_ordering(934) 00:16:31.993 fused_ordering(935) 00:16:31.993 fused_ordering(936) 00:16:31.993 fused_ordering(937) 00:16:31.993 fused_ordering(938) 00:16:31.993 fused_ordering(939) 00:16:31.993 fused_ordering(940) 00:16:31.993 fused_ordering(941) 00:16:31.993 fused_ordering(942) 00:16:31.993 fused_ordering(943) 00:16:31.993 fused_ordering(944) 00:16:31.993 fused_ordering(945) 00:16:31.993 fused_ordering(946) 00:16:31.993 fused_ordering(947) 00:16:31.993 fused_ordering(948) 00:16:31.993 fused_ordering(949) 00:16:31.993 fused_ordering(950) 00:16:31.993 fused_ordering(951) 00:16:31.993 fused_ordering(952) 00:16:31.993 fused_ordering(953) 00:16:31.993 fused_ordering(954) 00:16:31.993 fused_ordering(955) 00:16:31.993 fused_ordering(956) 00:16:31.993 fused_ordering(957) 00:16:31.993 fused_ordering(958) 00:16:31.993 fused_ordering(959) 00:16:31.993 fused_ordering(960) 00:16:31.993 fused_ordering(961) 00:16:31.993 fused_ordering(962) 00:16:31.993 fused_ordering(963) 00:16:31.993 fused_ordering(964) 00:16:31.993 fused_ordering(965) 00:16:31.993 fused_ordering(966) 00:16:31.993 fused_ordering(967) 
00:16:31.993 fused_ordering(968) 00:16:31.993 fused_ordering(969) 00:16:31.993 fused_ordering(970) 00:16:31.993 fused_ordering(971) 00:16:31.993 fused_ordering(972) 00:16:31.993 fused_ordering(973) 00:16:31.993 fused_ordering(974) 00:16:31.993 fused_ordering(975) 00:16:31.993 fused_ordering(976) 00:16:31.993 fused_ordering(977) 00:16:31.993 fused_ordering(978) 00:16:31.993 fused_ordering(979) 00:16:31.994 fused_ordering(980) 00:16:31.994 fused_ordering(981) 00:16:31.994 fused_ordering(982) 00:16:31.994 fused_ordering(983) 00:16:31.994 fused_ordering(984) 00:16:31.994 fused_ordering(985) 00:16:31.994 fused_ordering(986) 00:16:31.994 fused_ordering(987) 00:16:31.994 fused_ordering(988) 00:16:31.994 fused_ordering(989) 00:16:31.994 fused_ordering(990) 00:16:31.994 fused_ordering(991) 00:16:31.994 fused_ordering(992) 00:16:31.994 fused_ordering(993) 00:16:31.994 fused_ordering(994) 00:16:31.994 fused_ordering(995) 00:16:31.994 fused_ordering(996) 00:16:31.994 fused_ordering(997) 00:16:31.994 fused_ordering(998) 00:16:31.994 fused_ordering(999) 00:16:31.994 fused_ordering(1000) 00:16:31.994 fused_ordering(1001) 00:16:31.994 fused_ordering(1002) 00:16:31.994 fused_ordering(1003) 00:16:31.994 fused_ordering(1004) 00:16:31.994 fused_ordering(1005) 00:16:31.994 fused_ordering(1006) 00:16:31.994 fused_ordering(1007) 00:16:31.994 fused_ordering(1008) 00:16:31.994 fused_ordering(1009) 00:16:31.994 fused_ordering(1010) 00:16:31.994 fused_ordering(1011) 00:16:31.994 fused_ordering(1012) 00:16:31.994 fused_ordering(1013) 00:16:31.994 fused_ordering(1014) 00:16:31.994 fused_ordering(1015) 00:16:31.994 fused_ordering(1016) 00:16:31.994 fused_ordering(1017) 00:16:31.994 fused_ordering(1018) 00:16:31.994 fused_ordering(1019) 00:16:31.994 fused_ordering(1020) 00:16:31.994 fused_ordering(1021) 00:16:31.994 fused_ordering(1022) 00:16:31.994 fused_ordering(1023) 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:31.994 rmmod nvme_tcp 00:16:31.994 rmmod nvme_fabrics 00:16:31.994 rmmod nvme_keyring 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 2297349 ']' 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 2297349 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 
2297349 ']' 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2297349 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2297349 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2297349' 00:16:31.994 killing process with pid 2297349 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2297349 00:16:31.994 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2297349 00:16:32.253 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:32.253 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:32.253 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:32.253 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:16:32.253 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:16:32.253 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:32.253 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:16:32.253 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:32.253 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:32.253 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.253 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.254 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.159 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:34.159 00:16:34.159 real 0m8.728s 00:16:34.159 user 0m6.447s 00:16:34.159 sys 0m3.576s 00:16:34.159 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:34.159 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:34.159 ************************************ 00:16:34.159 END TEST nvmf_fused_ordering 00:16:34.159 ************************************ 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:34.418 04:52:24 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:34.418 ************************************ 00:16:34.418 START TEST nvmf_ns_masking 00:16:34.418 ************************************ 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:34.418 * Looking for test storage... 00:16:34.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1689 -- # lcov --version 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:16:34.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.418 --rc genhtml_branch_coverage=1 00:16:34.418 --rc genhtml_function_coverage=1 00:16:34.418 --rc genhtml_legend=1 00:16:34.418 --rc geninfo_all_blocks=1 00:16:34.418 --rc geninfo_unexecuted_blocks=1 00:16:34.418 00:16:34.418 ' 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:16:34.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.418 --rc genhtml_branch_coverage=1 00:16:34.418 --rc genhtml_function_coverage=1 00:16:34.418 --rc genhtml_legend=1 00:16:34.418 --rc geninfo_all_blocks=1 00:16:34.418 --rc geninfo_unexecuted_blocks=1 00:16:34.418 00:16:34.418 ' 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:16:34.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.418 --rc genhtml_branch_coverage=1 00:16:34.418 --rc genhtml_function_coverage=1 00:16:34.418 --rc genhtml_legend=1 00:16:34.418 --rc geninfo_all_blocks=1 00:16:34.418 --rc geninfo_unexecuted_blocks=1 00:16:34.418 00:16:34.418 ' 00:16:34.418 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:16:34.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.418 --rc genhtml_branch_coverage=1 00:16:34.418 --rc genhtml_function_coverage=1 00:16:34.418 --rc genhtml_legend=1 00:16:34.418 --rc geninfo_all_blocks=1 00:16:34.418 --rc geninfo_unexecuted_blocks=1 00:16:34.418 00:16:34.418 ' 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:34.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3f1f8184-82c9-42d3-a83a-5648eb3a55ec 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=e386e9ce-22c0-436e-a97f-bf885c64149c 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6dc8e3e3-293c-4e10-985b-f09b5dcefea3 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:34.419 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.951 04:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:36.951 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:36.951 04:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:36.951 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:36.951 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:36.951 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:36.951 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.952 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.952 04:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:16:36.952 00:16:36.952 --- 10.0.0.2 ping statistics --- 00:16:36.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.952 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:16:36.952 00:16:36.952 --- 10.0.0.1 ping statistics --- 00:16:36.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.952 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=2299817 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 2299817 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2299817 ']' 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.952 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:36.952 [2024-10-28 04:52:27.203076] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:16:36.952 [2024-10-28 04:52:27.203171] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.952 [2024-10-28 04:52:27.349304] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:36.952 [2024-10-28 04:52:27.385018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.952 [2024-10-28 04:52:27.432943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.952 [2024-10-28 04:52:27.433019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.952 [2024-10-28 04:52:27.433045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.952 [2024-10-28 04:52:27.433067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.952 [2024-10-28 04:52:27.433084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
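At this point the test network is up and the target application is initializing inside its namespace. The topology is one physical adapter split across network namespaces rather than two machines: cvl_0_0 is moved into cvl_0_0_ns_spdk and becomes the target port (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator port (10.0.0.1). A condensed recap of the setup traced above, with the Jenkins workspace path to the nvmf_tgt binary abbreviated:

ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port leaves the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in on the initiator side
ping -c 1 10.0.0.2                                    # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator sanity check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF   # nvmf_tgt runs inside the namespace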
00:16:36.952 [2024-10-28 04:52:27.433834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.886 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.886 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:37.886 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:37.886 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:37.886 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:37.886 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.886 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:38.145 [2024-10-28 04:52:28.568598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.145 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:38.145 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:38.145 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:38.403 Malloc1 00:16:38.403 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:38.662 Malloc2 00:16:38.662 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:39.236 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:39.236 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.805 [2024-10-28 04:52:30.095899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.805 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:39.805 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6dc8e3e3-293c-4e10-985b-f09b5dcefea3 -a 10.0.0.2 -s 4420 -i 4 00:16:39.805 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:39.805 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:39.805 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:39.805 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:39.805 
04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:41.706 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:41.706 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:41.706 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.706 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:41.706 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.706 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:41.706 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:41.706 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:41.706 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:41.706 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:41.706 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:41.964 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.964 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:41.964 [ 0]:0x1 00:16:41.964 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:41.964 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.964 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29bae3f9088141da8515e984523a3991 00:16:41.964 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29bae3f9088141da8515e984523a3991 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.964 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:42.222 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:42.222 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:42.222 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:42.222 [ 0]:0x1 00:16:42.222 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:42.222 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:42.223 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29bae3f9088141da8515e984523a3991 00:16:42.223 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29bae3f9088141da8515e984523a3991 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:42.223 04:52:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:42.223 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:42.223 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:42.223 [ 1]:0x2 00:16:42.223 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:42.223 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:42.223 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3002be11481f4deb95905399310d673a 00:16:42.223 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3002be11481f4deb95905399310d673a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:42.223 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:42.223 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.481 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:42.739 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:42.997 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:42.997 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6dc8e3e3-293c-4e10-985b-f09b5dcefea3 -a 10.0.0.2 -s 4420 -i 4 00:16:43.256 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:43.256 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:43.256 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.256 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:43.256 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:43.256 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:45.156 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:45.156 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:45.156 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:45.156 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:45.156 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:45.156 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:16:45.156 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:45.156 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:45.414 [ 0]:0x2 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=3002be11481f4deb95905399310d673a 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3002be11481f4deb95905399310d673a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.414 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:45.980 [ 0]:0x1 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29bae3f9088141da8515e984523a3991 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29bae3f9088141da8515e984523a3991 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:45.980 [ 1]:0x2 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3002be11481f4deb95905399310d673a 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3002be11481f4deb95905399310d673a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.980 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.238 04:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:46.238 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:46.239 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:46.239 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:46.239 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:46.239 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:46.239 [ 0]:0x2 00:16:46.239 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:46.239 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:46.239 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3002be11481f4deb95905399310d673a 00:16:46.239 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3002be11481f4deb95905399310d673a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:46.239 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:46.239 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.496 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:46.755 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:46.755 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6dc8e3e3-293c-4e10-985b-f09b5dcefea3 -a 10.0.0.2 -s 4420 -i 4 00:16:46.755 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:46.755 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:46.755 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.755 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:46.755 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:46.755 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:49.283 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:49.283 [ 0]:0x1 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29bae3f9088141da8515e984523a3991 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29bae3f9088141da8515e984523a3991 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:49.284 [ 1]:0x2 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3002be11481f4deb95905399310d673a 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3002be11481f4deb95905399310d673a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:49.284 [ 0]:0x2 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3002be11481f4deb95905399310d673a 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3002be11481f4deb95905399310d673a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.284 04:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:49.284 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:49.542 [2024-10-28 04:52:40.098507] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:49.542 request: 00:16:49.542 { 00:16:49.542 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.542 "nsid": 2, 00:16:49.542 "host": "nqn.2016-06.io.spdk:host1", 00:16:49.542 "method": "nvmf_ns_remove_host", 00:16:49.542 "req_id": 1 00:16:49.542 } 00:16:49.542 Got JSON-RPC error response 00:16:49.542 response: 00:16:49.542 { 00:16:49.542 "code": -32602, 00:16:49.542 "message": "Invalid parameters" 00:16:49.542 } 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:49.542 04:52:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:49.542 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:49.799 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:49.799 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:49.799 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:49.799 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:49.800 [ 0]:0x2 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3002be11481f4deb95905399310d673a 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3002be11481f4deb95905399310d673a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:49.800 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:50.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.058 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2301525 00:16:50.058 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:50.058 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.058 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2301525 /var/tmp/host.sock 00:16:50.058 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2301525 ']' 00:16:50.058 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:50.058 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:50.058 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:50.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:50.058 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:50.058 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:50.058 [2024-10-28 04:52:40.497127] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:16:50.058 [2024-10-28 04:52:40.497210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301525 ] 00:16:50.058 [2024-10-28 04:52:40.629018] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:50.316 [2024-10-28 04:52:40.671120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.316 [2024-10-28 04:52:40.722269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.250 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:51.250 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:51.250 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:51.509 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:51.767 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3f1f8184-82c9-42d3-a83a-5648eb3a55ec 00:16:51.767 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:16:51.767 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3F1F818482C942D3A83A5648EB3A55EC -i 00:16:52.024 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid e386e9ce-22c0-436e-a97f-bf885c64149c 00:16:52.024 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:16:52.024 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g E386E9CE22C0436EA97FBF885C64149C -i 00:16:52.283 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:52.541 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:52.851 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:52.851 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:53.152 nvme0n1 00:16:53.152 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:53.152 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:53.719 nvme1n2 00:16:53.719 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:53.719 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:53.719 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:53.719 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:53.719 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:53.978 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:53.978 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:53.978 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:53.978 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:54.237 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3f1f8184-82c9-42d3-a83a-5648eb3a55ec == \3\f\1\f\8\1\8\4\-\8\2\c\9\-\4\2\d\3\-\a\8\3\a\-\5\6\4\8\e\b\3\a\5\5\e\c ]] 00:16:54.237 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:54.237 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:54.237 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:54.495 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ e386e9ce-22c0-436e-a97f-bf885c64149c == \e\3\8\6\e\9\c\e\-\2\2\c\0\-\4\3\6\e\-\a\9\7\f\-\b\f\8\8\5\c\6\4\1\4\9\c ]] 00:16:54.495 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.062 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 3f1f8184-82c9-42d3-a83a-5648eb3a55ec 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3F1F818482C942D3A83A5648EB3A55EC 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3F1F818482C942D3A83A5648EB3A55EC 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:55.321 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3F1F818482C942D3A83A5648EB3A55EC 00:16:55.579 [2024-10-28 04:52:46.001740] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:16:55.579 [2024-10-28 04:52:46.001783] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:16:55.579 [2024-10-28 
04:52:46.001808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.579 request: 00:16:55.579 { 00:16:55.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.579 "namespace": { 00:16:55.579 "bdev_name": "invalid", 00:16:55.579 "nsid": 1, 00:16:55.579 "nguid": "3F1F818482C942D3A83A5648EB3A55EC", 00:16:55.579 "no_auto_visible": false 00:16:55.579 }, 00:16:55.579 "method": "nvmf_subsystem_add_ns", 00:16:55.579 "req_id": 1 00:16:55.579 } 00:16:55.579 Got JSON-RPC error response 00:16:55.579 response: 00:16:55.579 { 00:16:55.579 "code": -32602, 00:16:55.579 "message": "Invalid parameters" 00:16:55.579 } 00:16:55.579 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:55.579 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:55.579 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:55.579 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:55.579 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 3f1f8184-82c9-42d3-a83a-5648eb3a55ec 00:16:55.579 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:16:55.579 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3F1F818482C942D3A83A5648EB3A55EC -i 00:16:55.837 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:16:57.737 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:16:57.737 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:16:57.737 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:57.996 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:16:57.996 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2301525 00:16:57.996 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2301525 ']' 00:16:57.996 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2301525 00:16:57.996 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:57.996 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.996 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2301525 00:16:58.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:58.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:58.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2301525' 00:16:58.254 killing process with pid 2301525 00:16:58.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2301525 00:16:58.254 04:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2301525 00:16:58.512 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.770 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:58.770 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:16:58.770 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:58.770 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:58.770 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:58.770 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:58.770 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:58.770 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:58.770 rmmod nvme_tcp 00:16:59.028 rmmod nvme_fabrics 00:16:59.028 rmmod nvme_keyring 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 2299817 ']' 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 2299817 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2299817 ']' 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2299817 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2299817 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2299817' 00:16:59.028 killing process with pid 2299817 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2299817 00:16:59.028 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2299817 00:16:59.287 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:59.287 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:59.287 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:59.287 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:59.287 04:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:16:59.287 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:59.287 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:16:59.287 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:59.287 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:59.287 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.287 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.287 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.192 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:01.451 00:17:01.451 real 0m26.995s 00:17:01.451 user 0m40.209s 00:17:01.452 sys 0m4.700s 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:01.452 ************************************ 00:17:01.452 END TEST nvmf_ns_masking 00:17:01.452 ************************************ 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:01.452 ************************************ 00:17:01.452 START TEST nvmf_nvme_cli 00:17:01.452 ************************************ 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:01.452 * Looking for test storage... 
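The nvmf_ns_masking run that finishes above drives namespace visibility through NGUIDs: the trace converts the namespace UUID with 'tr -d -' (the resulting NGUID appears upper-cased in the log), re-adds Malloc1 to cnode1 with '-g <NGUID> -i' so it is not auto-visible, and then counts host-side bdevs with 'bdev_get_bdevs | jq length', expecting 0. A minimal sketch of that sequence, assuming the target listens on the default RPC socket and the host instance on /var/tmp/host.sock as in the trace (rpc.py path abbreviated):

  uuid=3f1f8184-82c9-42d3-a83a-5648eb3a55ec
  nguid=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')          # 3F1F818482C942D3A83A5648EB3A55EC, as logged above

  # attach Malloc1 to the subsystem keyed by that NGUID, without auto-visibility (-i)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i

  # the host-side SPDK instance should not see the namespace yet
  scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq length     # expected: 0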
00:17:01.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1689 -- # lcov --version 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.452 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:17:01.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.452 --rc genhtml_branch_coverage=1 00:17:01.452 --rc genhtml_function_coverage=1 00:17:01.452 --rc genhtml_legend=1 00:17:01.452 --rc geninfo_all_blocks=1 00:17:01.452 --rc geninfo_unexecuted_blocks=1 00:17:01.452 00:17:01.452 ' 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:17:01.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.452 --rc genhtml_branch_coverage=1 00:17:01.452 --rc genhtml_function_coverage=1 00:17:01.452 --rc genhtml_legend=1 00:17:01.452 --rc geninfo_all_blocks=1 00:17:01.452 --rc geninfo_unexecuted_blocks=1 00:17:01.452 00:17:01.452 ' 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:17:01.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.452 --rc genhtml_branch_coverage=1 00:17:01.452 --rc genhtml_function_coverage=1 00:17:01.452 --rc genhtml_legend=1 00:17:01.452 --rc geninfo_all_blocks=1 00:17:01.452 --rc geninfo_unexecuted_blocks=1 00:17:01.452 00:17:01.452 ' 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:17:01.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.452 --rc genhtml_branch_coverage=1 00:17:01.452 --rc genhtml_function_coverage=1 00:17:01.452 --rc genhtml_legend=1 00:17:01.452 --rc geninfo_all_blocks=1 00:17:01.452 --rc geninfo_unexecuted_blocks=1 00:17:01.452 00:17:01.452 ' 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
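The lcov version probe traced above ('lt 1.15 2' via cmp_versions in scripts/common.sh) splits each version string on dots, dashes and colons and compares the fields numerically, which is how the detected lcov 1.15 is judged older than 2 before the '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' options are selected. A simplified stand-alone sketch of that comparison, using a hypothetical helper name rather than the project's own function:

  version_lt() {                           # return 0 when $1 sorts before $2
      local IFS='.-:'                      # split fields on dots, dashes and colons, as in the trace
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                             # equal is not less-than
  }

  version_lt 1.15 2 && echo 'lcov is older than 2.x'   # prints the message: 1 < 2 at the first field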
00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.452 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.453 04:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:01.453 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:03.987 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:03.987 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.987 
04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.987 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:03.988 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:03.988 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:03.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:17:03.988 00:17:03.988 --- 10.0.0.2 ping statistics --- 00:17:03.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.988 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:17:03.988 00:17:03.988 --- 10.0.0.1 ping statistics --- 00:17:03.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.988 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=2304527 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 2304527 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2304527 ']' 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.988 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:03.988 [2024-10-28 04:52:54.242599] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:17:03.988 [2024-10-28 04:52:54.242723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.988 [2024-10-28 04:52:54.383587] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:03.988 [2024-10-28 04:52:54.425803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.988 [2024-10-28 04:52:54.478928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.988 [2024-10-28 04:52:54.479014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.988 [2024-10-28 04:52:54.479040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.988 [2024-10-28 04:52:54.479060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.988 [2024-10-28 04:52:54.479079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.988 [2024-10-28 04:52:54.480898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.988 [2024-10-28 04:52:54.480961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.988 [2024-10-28 04:52:54.481017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.988 [2024-10-28 04:52:54.481021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:04.922 [2024-10-28 04:52:55.299415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:04.922 Malloc0 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.922 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:04.922 Malloc1 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:04.923 [2024-10-28 04:52:55.393264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.923 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:05.181 00:17:05.181 Discovery Log Number of Records 2, Generation counter 2 00:17:05.181 =====Discovery Log Entry 0====== 00:17:05.181 trtype: tcp 00:17:05.181 adrfam: 
ipv4 00:17:05.181 subtype: current discovery subsystem 00:17:05.181 treq: not required 00:17:05.181 portid: 0 00:17:05.181 trsvcid: 4420 00:17:05.181 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:05.181 traddr: 10.0.0.2 00:17:05.181 eflags: explicit discovery connections, duplicate discovery information 00:17:05.181 sectype: none 00:17:05.181 =====Discovery Log Entry 1====== 00:17:05.181 trtype: tcp 00:17:05.181 adrfam: ipv4 00:17:05.181 subtype: nvme subsystem 00:17:05.181 treq: not required 00:17:05.181 portid: 0 00:17:05.181 trsvcid: 4420 00:17:05.181 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:05.181 traddr: 10.0.0.2 00:17:05.181 eflags: none 00:17:05.181 sectype: none 00:17:05.181 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:05.181 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:05.181 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:05.181 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:05.181 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:05.181 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:05.181 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:05.181 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:05.181 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:05.181 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:05.181 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:05.746 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:05.746 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:05.746 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:05.746 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:05.746 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:05.746 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # 
return 0 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:08.274 /dev/nvme0n2 ]] 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:08.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:08.274 rmmod nvme_tcp 00:17:08.274 rmmod nvme_fabrics 00:17:08.274 rmmod nvme_keyring 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 2304527 ']' 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 2304527 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2304527 ']' 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2304527 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:17:08.274 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2304527 00:17:08.533 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:08.533 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:08.533 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2304527' 00:17:08.533 killing process with pid 2304527 00:17:08.533 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2304527 00:17:08.533 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2304527 00:17:08.792 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:08.792 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:08.792 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:08.792 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:08.792 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:17:08.792 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:08.792 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:17:08.792 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:08.792 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:08.792 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.792 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.792 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.696 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:10.696 00:17:10.696 real 0m9.364s 00:17:10.696 user 0m19.414s 00:17:10.696 sys 0m2.298s 00:17:10.696 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:10.696 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:10.696 ************************************ 00:17:10.696 END TEST nvmf_nvme_cli 00:17:10.696 ************************************ 00:17:10.696 04:53:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:10.696 04:53:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:10.696 04:53:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:10.696 04:53:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:10.696 04:53:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.696 ************************************ 00:17:10.696 START TEST nvmf_vfio_user 00:17:10.696 ************************************ 00:17:10.696 04:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:10.955 * Looking for test storage... 00:17:10.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1689 -- # lcov --version 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:17:10.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.956 --rc genhtml_branch_coverage=1 00:17:10.956 --rc genhtml_function_coverage=1 00:17:10.956 --rc genhtml_legend=1 00:17:10.956 --rc geninfo_all_blocks=1 00:17:10.956 --rc geninfo_unexecuted_blocks=1 00:17:10.956 00:17:10.956 ' 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:17:10.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.956 --rc genhtml_branch_coverage=1 00:17:10.956 --rc genhtml_function_coverage=1 00:17:10.956 --rc genhtml_legend=1 00:17:10.956 --rc geninfo_all_blocks=1 00:17:10.956 --rc geninfo_unexecuted_blocks=1 00:17:10.956 00:17:10.956 ' 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:17:10.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.956 --rc genhtml_branch_coverage=1 00:17:10.956 --rc genhtml_function_coverage=1 00:17:10.956 --rc genhtml_legend=1 00:17:10.956 --rc geninfo_all_blocks=1 00:17:10.956 --rc geninfo_unexecuted_blocks=1 00:17:10.956 00:17:10.956 ' 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:17:10.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.956 --rc genhtml_branch_coverage=1 00:17:10.956 --rc genhtml_function_coverage=1 00:17:10.956 --rc genhtml_legend=1 00:17:10.956 --rc geninfo_all_blocks=1 00:17:10.956 --rc geninfo_unexecuted_blocks=1 00:17:10.956 00:17:10.956 ' 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
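Note on the "[: : integer expression expected" message above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) refuses an empty string where -eq expects an integer, so the warning is emitted and the branch is simply skipped. A minimal sketch of the failing pattern and one way to guard it (the variable name here is illustrative, not the script's own):

    # '' -eq 1 fails: -eq needs integer operands on both sides
    flag=""
    if [ "${flag:-0}" -eq 1 ]; then   # defaulting the empty value to 0 keeps test(1) happy
        echo "option guarded by this check would be enabled"
    fi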
00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.956 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2305458 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2305458' 00:17:10.957 Process pid: 2305458 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2305458 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2305458 ']' 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:10.957 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:10.957 [2024-10-28 04:53:01.472132] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:17:10.957 [2024-10-28 04:53:01.472222] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.215 [2024-10-28 04:53:01.604433] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:11.215 [2024-10-28 04:53:01.638975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:11.215 [2024-10-28 04:53:01.686003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:11.215 [2024-10-28 04:53:01.686046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.215 [2024-10-28 04:53:01.686068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.215 [2024-10-28 04:53:01.686087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.215 [2024-10-28 04:53:01.686102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.215 [2024-10-28 04:53:01.687755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.215 [2024-10-28 04:53:01.687783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.215 [2024-10-28 04:53:01.687815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:11.215 [2024-10-28 04:53:01.687818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.146 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:12.146 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:12.146 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:13.077 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:13.335 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:13.335 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:13.335 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:13.335 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:13.335 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:13.592 Malloc1 00:17:13.592 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:13.850 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:14.414 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:14.672 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:14.672 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:14.672 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:14.930 Malloc2 00:17:14.930 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:15.187 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:15.752 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:16.012 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:16.012 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:16.012 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:16.012 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:16.012 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:16.012 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:16.012 [2024-10-28 04:53:06.407502] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:17:16.012 [2024-10-28 04:53:06.407537] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306116 ] 00:17:16.012 [2024-10-28 04:53:06.524281] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
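The run above stands up the VFIO-USER target end to end: launch nvmf_tgt on cores 0-3, create a VFIOUSER transport, back each of the two controllers with a 64 MB, 512-byte-block malloc bdev, wrap it in a subsystem, and add a listener rooted at a per-controller directory under /var/run/vfio-user. A condensed sketch of the same flow with plain rpc.py calls, assuming the working directory is an SPDK build tree (arguments mirror the log; the readiness poll is a simplified stand-in for the harness's waitforlisten helper):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done   # wait for /var/tmp/spdk.sock

    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                        # 64 MB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0               # socket directory instead of IP:port

The spdk_nvme_identify invocation that follows points its -r string at that same directory (trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1), which is why no IP address or port appears anywhere in the vfio-user half of this test.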
00:17:16.012 [2024-10-28 04:53:06.558669] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:16.012 [2024-10-28 04:53:06.564083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:16.012 [2024-10-28 04:53:06.564111] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f553635b000 00:17:16.012 [2024-10-28 04:53:06.565081] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:16.012 [2024-10-28 04:53:06.566070] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:16.012 [2024-10-28 04:53:06.567068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:16.012 [2024-10-28 04:53:06.568074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:16.012 [2024-10-28 04:53:06.569078] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:16.012 [2024-10-28 04:53:06.570079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:16.012 [2024-10-28 04:53:06.571086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:16.012 [2024-10-28 04:53:06.572085] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:16.012 [2024-10-28 04:53:06.573094] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:16.012 [2024-10-28 04:53:06.573114] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f553505b000 00:17:16.012 [2024-10-28 04:53:06.574260] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:16.012 [2024-10-28 04:53:06.589874] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:16.012 [2024-10-28 04:53:06.589913] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:16.012 [2024-10-28 04:53:06.592158] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:16.012 [2024-10-28 04:53:06.592212] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:16.012 [2024-10-28 04:53:06.592302] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:16.012 [2024-10-28 04:53:06.592331] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:16.012 [2024-10-28 04:53:06.592342] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs 
(no timeout) 00:17:16.012 [2024-10-28 04:53:06.593149] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:16.012 [2024-10-28 04:53:06.593168] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:16.012 [2024-10-28 04:53:06.593180] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:16.012 [2024-10-28 04:53:06.594148] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:16.012 [2024-10-28 04:53:06.594168] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:16.012 [2024-10-28 04:53:06.594186] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:16.012 [2024-10-28 04:53:06.595156] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:16.012 [2024-10-28 04:53:06.595177] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:16.012 [2024-10-28 04:53:06.596158] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:16.012 [2024-10-28 04:53:06.596177] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:16.012 [2024-10-28 04:53:06.596187] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:16.012 [2024-10-28 04:53:06.596198] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:16.012 [2024-10-28 04:53:06.596307] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:16.012 [2024-10-28 04:53:06.596315] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:16.013 [2024-10-28 04:53:06.596323] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003a0000 00:17:16.013 [2024-10-28 04:53:06.597167] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003a6000 00:17:16.013 [2024-10-28 04:53:06.598168] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:16.013 [2024-10-28 04:53:06.599168] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:16.013 [2024-10-28 04:53:06.600166] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:16.013 [2024-10-28 04:53:06.600259] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:16.013 [2024-10-28 04:53:06.601180] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:16.013 [2024-10-28 04:53:06.601216] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:16.013 [2024-10-28 04:53:06.601226] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601250] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:16.013 [2024-10-28 04:53:06.601267] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601289] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002dd000 len:4096 00:17:16.013 [2024-10-28 04:53:06.601298] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002dd000 00:17:16.013 [2024-10-28 04:53:06.601305] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:16.013 [2024-10-28 04:53:06.601322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002dd000 PRP2 0x0 00:17:16.013 [2024-10-28 04:53:06.601376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:16.013 [2024-10-28 04:53:06.601395] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:16.013 [2024-10-28 04:53:06.601404] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:16.013 [2024-10-28 04:53:06.601410] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:16.013 [2024-10-28 04:53:06.601418] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:16.013 [2024-10-28 04:53:06.601425] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:16.013 [2024-10-28 04:53:06.601433] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:16.013 [2024-10-28 04:53:06.601440] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601452] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:16.013 [2024-10-28 04:53:06.601482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 
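Two of the register values in the bring-up trace above are easy to decode by hand: the controller returned VS = 0x10300 (offset 0x8) and was enabled with CC = 0x460001 (offset 0x14). Using the standard NVMe register layout, this is a quick sanity check rather than anything the test itself runs:

    vs=0x10300 cc=0x460001
    printf 'VS -> NVMe %d.%d.%d\n' $(( vs >> 16 )) $(( (vs >> 8) & 0xff )) $(( vs & 0xff ))
    printf 'CC -> EN=%d IOSQES=%d (%d-byte SQE) IOCQES=%d (%d-byte CQE)\n' \
        $(( cc & 1 )) $(( (cc >> 16) & 0xf )) $(( 1 << ((cc >> 16) & 0xf) )) \
        $(( (cc >> 20) & 0xf )) $(( 1 << ((cc >> 20) & 0xf) ))

This prints NVMe 1.3.0 with 64-byte submission queue entries and 16-byte completion queue entries, matching the "NVMe Specification Version (VS): 1.3" and the queue entry sizes reported in the identify output further down.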
00:17:16.013 [2024-10-28 04:53:06.601502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.013 [2024-10-28 04:53:06.601515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.013 [2024-10-28 04:53:06.601527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.013 [2024-10-28 04:53:06.601538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.013 [2024-10-28 04:53:06.601546] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601557] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:16.013 [2024-10-28 04:53:06.601581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:16.013 [2024-10-28 04:53:06.601594] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:16.013 [2024-10-28 04:53:06.601603] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601613] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601648] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:16.013 [2024-10-28 04:53:06.601679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:16.013 [2024-10-28 04:53:06.601744] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601764] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601778] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:17:16.013 [2024-10-28 04:53:06.601786] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:17:16.013 [2024-10-28 04:53:06.601792] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:16.013 [2024-10-28 04:53:06.601801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002db000 PRP2 0x0 00:17:16.013 [2024-10-28 04:53:06.601816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:16.013 [2024-10-28 04:53:06.601832] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:16.013 [2024-10-28 04:53:06.601852] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601866] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601878] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002dd000 len:4096 00:17:16.013 [2024-10-28 04:53:06.601886] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002dd000 00:17:16.013 [2024-10-28 04:53:06.601892] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:16.013 [2024-10-28 04:53:06.601901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002dd000 PRP2 0x0 00:17:16.013 [2024-10-28 04:53:06.601926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:16.013 [2024-10-28 04:53:06.601961] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601977] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.601990] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002dd000 len:4096 00:17:16.013 [2024-10-28 04:53:06.601998] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002dd000 00:17:16.013 [2024-10-28 04:53:06.602004] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:16.013 [2024-10-28 04:53:06.602013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002dd000 PRP2 0x0 00:17:16.013 [2024-10-28 04:53:06.602027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:16.013 [2024-10-28 04:53:06.602042] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.602053] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.602067] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.602078] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.602087] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.602100] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.602109] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:16.013 [2024-10-28 04:53:06.602117] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:16.013 [2024-10-28 04:53:06.602125] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:16.013 [2024-10-28 04:53:06.602150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:16.013 [2024-10-28 04:53:06.602169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:16.013 [2024-10-28 04:53:06.602188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:16.013 [2024-10-28 04:53:06.602200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:16.013 [2024-10-28 04:53:06.602216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:16.013 [2024-10-28 04:53:06.602231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:16.013 [2024-10-28 04:53:06.602247] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:16.013 [2024-10-28 04:53:06.602262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:16.013 [2024-10-28 04:53:06.602284] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d8000 len:8192 00:17:16.013 [2024-10-28 04:53:06.602295] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d8000 00:17:16.013 [2024-10-28 04:53:06.602301] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002d9000 00:17:16.014 [2024-10-28 04:53:06.602307] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002d9000 00:17:16.014 [2024-10-28 04:53:06.602313] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:16.014 [2024-10-28 04:53:06.602323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002d8000 PRP2 0x2000002d9000 00:17:16.014 [2024-10-28 04:53:06.602335] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002de000 len:512 00:17:16.014 [2024-10-28 04:53:06.602343] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002de000 00:17:16.014 [2024-10-28 04:53:06.602349] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:16.014 [2024-10-28 04:53:06.602358] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002de000 PRP2 0x0 00:17:16.014 [2024-10-28 04:53:06.602369] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002dd000 len:512 00:17:16.014 [2024-10-28 04:53:06.602377] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002dd000 00:17:16.014 [2024-10-28 04:53:06.602383] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:16.014 [2024-10-28 04:53:06.602392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002dd000 PRP2 0x0 00:17:16.014 [2024-10-28 04:53:06.602408] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d6000 len:4096 00:17:16.014 [2024-10-28 04:53:06.602422] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d6000 00:17:16.014 [2024-10-28 04:53:06.602429] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:16.014 [2024-10-28 04:53:06.602438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002d6000 PRP2 0x0 00:17:16.014 [2024-10-28 04:53:06.602451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:16.014 [2024-10-28 04:53:06.602470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:16.014 [2024-10-28 04:53:06.602489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:16.014 [2024-10-28 04:53:06.602501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:16.014 ===================================================== 00:17:16.014 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:16.014 ===================================================== 00:17:16.014 Controller Capabilities/Features 00:17:16.014 ================================ 00:17:16.014 Vendor ID: 4e58 00:17:16.014 Subsystem Vendor ID: 4e58 00:17:16.014 Serial Number: SPDK1 00:17:16.014 Model Number: SPDK bdev Controller 00:17:16.014 Firmware Version: 25.01 00:17:16.014 Recommended Arb Burst: 6 00:17:16.014 IEEE OUI Identifier: 8d 6b 50 00:17:16.014 Multi-path I/O 00:17:16.014 May have multiple subsystem ports: Yes 00:17:16.014 May have multiple controllers: Yes 00:17:16.014 Associated with SR-IOV VF: No 00:17:16.014 Max Data Transfer Size: 131072 00:17:16.014 Max Number of Namespaces: 32 00:17:16.014 Max Number of I/O Queues: 127 00:17:16.014 NVMe Specification Version (VS): 1.3 00:17:16.014 NVMe Specification Version (Identify): 1.3 00:17:16.014 Maximum Queue Entries: 256 00:17:16.014 Contiguous Queues Required: Yes 00:17:16.014 Arbitration Mechanisms Supported 00:17:16.014 Weighted Round Robin: Not Supported 00:17:16.014 Vendor Specific: Not Supported 00:17:16.014 Reset Timeout: 15000 ms 00:17:16.014 Doorbell Stride: 4 bytes 00:17:16.014 NVM Subsystem Reset: Not Supported 00:17:16.014 Command Sets Supported 00:17:16.014 NVM Command Set: Supported 00:17:16.014 Boot Partition: Not Supported 00:17:16.014 Memory Page Size Minimum: 4096 bytes 
00:17:16.014 Memory Page Size Maximum: 4096 bytes 00:17:16.014 Persistent Memory Region: Not Supported 00:17:16.014 Optional Asynchronous Events Supported 00:17:16.014 Namespace Attribute Notices: Supported 00:17:16.014 Firmware Activation Notices: Not Supported 00:17:16.014 ANA Change Notices: Not Supported 00:17:16.014 PLE Aggregate Log Change Notices: Not Supported 00:17:16.014 LBA Status Info Alert Notices: Not Supported 00:17:16.014 EGE Aggregate Log Change Notices: Not Supported 00:17:16.014 Normal NVM Subsystem Shutdown event: Not Supported 00:17:16.014 Zone Descriptor Change Notices: Not Supported 00:17:16.014 Discovery Log Change Notices: Not Supported 00:17:16.014 Controller Attributes 00:17:16.014 128-bit Host Identifier: Supported 00:17:16.014 Non-Operational Permissive Mode: Not Supported 00:17:16.014 NVM Sets: Not Supported 00:17:16.014 Read Recovery Levels: Not Supported 00:17:16.014 Endurance Groups: Not Supported 00:17:16.014 Predictable Latency Mode: Not Supported 00:17:16.014 Traffic Based Keep ALive: Not Supported 00:17:16.014 Namespace Granularity: Not Supported 00:17:16.014 SQ Associations: Not Supported 00:17:16.014 UUID List: Not Supported 00:17:16.014 Multi-Domain Subsystem: Not Supported 00:17:16.014 Fixed Capacity Management: Not Supported 00:17:16.014 Variable Capacity Management: Not Supported 00:17:16.014 Delete Endurance Group: Not Supported 00:17:16.014 Delete NVM Set: Not Supported 00:17:16.014 Extended LBA Formats Supported: Not Supported 00:17:16.014 Flexible Data Placement Supported: Not Supported 00:17:16.014 00:17:16.014 Controller Memory Buffer Support 00:17:16.014 ================================ 00:17:16.014 Supported: No 00:17:16.014 00:17:16.014 Persistent Memory Region Support 00:17:16.014 ================================ 00:17:16.014 Supported: No 00:17:16.014 00:17:16.014 Admin Command Set Attributes 00:17:16.014 ============================ 00:17:16.014 Security Send/Receive: Not Supported 00:17:16.014 Format NVM: Not Supported 00:17:16.014 Firmware Activate/Download: Not Supported 00:17:16.014 Namespace Management: Not Supported 00:17:16.014 Device Self-Test: Not Supported 00:17:16.014 Directives: Not Supported 00:17:16.014 NVMe-MI: Not Supported 00:17:16.014 Virtualization Management: Not Supported 00:17:16.014 Doorbell Buffer Config: Not Supported 00:17:16.014 Get LBA Status Capability: Not Supported 00:17:16.014 Command & Feature Lockdown Capability: Not Supported 00:17:16.014 Abort Command Limit: 4 00:17:16.014 Async Event Request Limit: 4 00:17:16.014 Number of Firmware Slots: N/A 00:17:16.014 Firmware Slot 1 Read-Only: N/A 00:17:16.014 Firmware Activation Without Reset: N/A 00:17:16.014 Multiple Update Detection Support: N/A 00:17:16.014 Firmware Update Granularity: No Information Provided 00:17:16.014 Per-Namespace SMART Log: No 00:17:16.014 Asymmetric Namespace Access Log Page: Not Supported 00:17:16.014 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:16.014 Command Effects Log Page: Supported 00:17:16.014 Get Log Page Extended Data: Supported 00:17:16.014 Telemetry Log Pages: Not Supported 00:17:16.014 Persistent Event Log Pages: Not Supported 00:17:16.014 Supported Log Pages Log Page: May Support 00:17:16.014 Commands Supported & Effects Log Page: Not Supported 00:17:16.014 Feature Identifiers & Effects Log Page:May Support 00:17:16.014 NVMe-MI Commands & Effects Log Page: May Support 00:17:16.014 Data Area 4 for Telemetry Log: Not Supported 00:17:16.014 Error Log Page Entries Supported: 128 00:17:16.014 Keep Alive: Supported 
00:17:16.014 Keep Alive Granularity: 10000 ms 00:17:16.014 00:17:16.014 NVM Command Set Attributes 00:17:16.014 ========================== 00:17:16.014 Submission Queue Entry Size 00:17:16.014 Max: 64 00:17:16.014 Min: 64 00:17:16.014 Completion Queue Entry Size 00:17:16.014 Max: 16 00:17:16.014 Min: 16 00:17:16.014 Number of Namespaces: 32 00:17:16.014 Compare Command: Supported 00:17:16.014 Write Uncorrectable Command: Not Supported 00:17:16.014 Dataset Management Command: Supported 00:17:16.014 Write Zeroes Command: Supported 00:17:16.014 Set Features Save Field: Not Supported 00:17:16.014 Reservations: Not Supported 00:17:16.014 Timestamp: Not Supported 00:17:16.014 Copy: Supported 00:17:16.014 Volatile Write Cache: Present 00:17:16.014 Atomic Write Unit (Normal): 1 00:17:16.014 Atomic Write Unit (PFail): 1 00:17:16.014 Atomic Compare & Write Unit: 1 00:17:16.014 Fused Compare & Write: Supported 00:17:16.014 Scatter-Gather List 00:17:16.014 SGL Command Set: Supported (Dword aligned) 00:17:16.014 SGL Keyed: Not Supported 00:17:16.014 SGL Bit Bucket Descriptor: Not Supported 00:17:16.014 SGL Metadata Pointer: Not Supported 00:17:16.014 Oversized SGL: Not Supported 00:17:16.014 SGL Metadata Address: Not Supported 00:17:16.014 SGL Offset: Not Supported 00:17:16.014 Transport SGL Data Block: Not Supported 00:17:16.014 Replay Protected Memory Block: Not Supported 00:17:16.014 00:17:16.014 Firmware Slot Information 00:17:16.014 ========================= 00:17:16.014 Active slot: 1 00:17:16.014 Slot 1 Firmware Revision: 25.01 00:17:16.014 00:17:16.014 00:17:16.014 Commands Supported and Effects 00:17:16.014 ============================== 00:17:16.014 Admin Commands 00:17:16.014 -------------- 00:17:16.014 Get Log Page (02h): Supported 00:17:16.014 Identify (06h): Supported 00:17:16.014 Abort (08h): Supported 00:17:16.014 Set Features (09h): Supported 00:17:16.014 Get Features (0Ah): Supported 00:17:16.014 Asynchronous Event Request (0Ch): Supported 00:17:16.014 Keep Alive (18h): Supported 00:17:16.014 I/O Commands 00:17:16.014 ------------ 00:17:16.015 Flush (00h): Supported LBA-Change 00:17:16.015 Write (01h): Supported LBA-Change 00:17:16.015 Read (02h): Supported 00:17:16.015 Compare (05h): Supported 00:17:16.015 Write Zeroes (08h): Supported LBA-Change 00:17:16.015 Dataset Management (09h): Supported LBA-Change 00:17:16.015 Copy (19h): Supported LBA-Change 00:17:16.015 00:17:16.015 Error Log 00:17:16.015 ========= 00:17:16.015 00:17:16.015 Arbitration 00:17:16.015 =========== 00:17:16.015 Arbitration Burst: 1 00:17:16.015 00:17:16.015 Power Management 00:17:16.015 ================ 00:17:16.015 Number of Power States: 1 00:17:16.015 Current Power State: Power State #0 00:17:16.015 Power State #0: 00:17:16.015 Max Power: 0.00 W 00:17:16.015 Non-Operational State: Operational 00:17:16.015 Entry Latency: Not Reported 00:17:16.015 Exit Latency: Not Reported 00:17:16.015 Relative Read Throughput: 0 00:17:16.015 Relative Read Latency: 0 00:17:16.015 Relative Write Throughput: 0 00:17:16.015 Relative Write Latency: 0 00:17:16.015 Idle Power: Not Reported 00:17:16.015 Active Power: Not Reported 00:17:16.015 Non-Operational Permissive Mode: Not Supported 00:17:16.015 00:17:16.015 Health Information 00:17:16.015 ================== 00:17:16.015 Critical Warnings: 00:17:16.015 Available Spare Space: OK 00:17:16.015 Temperature: OK 00:17:16.015 Device Reliability: OK 00:17:16.015 Read Only: No 00:17:16.015 Volatile Memory Backup: OK 00:17:16.015 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:16.015 
Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:16.015 Available Spare: 0% 00:17:16.015 Available Sp[2024-10-28 04:53:06.602623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:16.015 [2024-10-28 04:53:06.602651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:16.015 [2024-10-28 04:53:06.602701] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:16.015 [2024-10-28 04:53:06.602718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.015 [2024-10-28 04:53:06.602729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.015 [2024-10-28 04:53:06.602739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.015 [2024-10-28 04:53:06.602749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.272 [2024-10-28 04:53:06.605649] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:16.272 [2024-10-28 04:53:06.605671] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:16.272 [2024-10-28 04:53:06.606184] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:16.273 [2024-10-28 04:53:06.606260] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:16.273 [2024-10-28 04:53:06.606279] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:16.273 [2024-10-28 04:53:06.607192] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:16.273 [2024-10-28 04:53:06.607214] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:16.273 [2024-10-28 04:53:06.607266] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:16.273 [2024-10-28 04:53:06.609231] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:16.273 are Threshold: 0% 00:17:16.273 Life Percentage Used: 0% 00:17:16.273 Data Units Read: 0 00:17:16.273 Data Units Written: 0 00:17:16.273 Host Read Commands: 0 00:17:16.273 Host Write Commands: 0 00:17:16.273 Controller Busy Time: 0 minutes 00:17:16.273 Power Cycles: 0 00:17:16.273 Power On Hours: 0 hours 00:17:16.273 Unsafe Shutdowns: 0 00:17:16.273 Unrecoverable Media Errors: 0 00:17:16.273 Lifetime Error Log Entries: 0 00:17:16.273 Warning Temperature Time: 0 minutes 00:17:16.273 Critical Temperature Time: 0 minutes 00:17:16.273 00:17:16.273 Number of Queues 00:17:16.273 ================ 00:17:16.273 Number of I/O Submission Queues: 127 00:17:16.273 Number of I/O Completion Queues: 127 00:17:16.273 00:17:16.273 Active Namespaces 00:17:16.273 
================= 00:17:16.273 Namespace ID:1 00:17:16.273 Error Recovery Timeout: Unlimited 00:17:16.273 Command Set Identifier: NVM (00h) 00:17:16.273 Deallocate: Supported 00:17:16.273 Deallocated/Unwritten Error: Not Supported 00:17:16.273 Deallocated Read Value: Unknown 00:17:16.273 Deallocate in Write Zeroes: Not Supported 00:17:16.273 Deallocated Guard Field: 0xFFFF 00:17:16.273 Flush: Supported 00:17:16.273 Reservation: Supported 00:17:16.273 Namespace Sharing Capabilities: Multiple Controllers 00:17:16.273 Size (in LBAs): 131072 (0GiB) 00:17:16.273 Capacity (in LBAs): 131072 (0GiB) 00:17:16.273 Utilization (in LBAs): 131072 (0GiB) 00:17:16.273 NGUID: 62D8BED6AA63422F987AC4FF9A441846 00:17:16.273 UUID: 62d8bed6-aa63-422f-987a-c4ff9a441846 00:17:16.273 Thin Provisioning: Not Supported 00:17:16.273 Per-NS Atomic Units: Yes 00:17:16.273 Atomic Boundary Size (Normal): 0 00:17:16.273 Atomic Boundary Size (PFail): 0 00:17:16.273 Atomic Boundary Offset: 0 00:17:16.273 Maximum Single Source Range Length: 65535 00:17:16.273 Maximum Copy Length: 65535 00:17:16.273 Maximum Source Range Count: 1 00:17:16.273 NGUID/EUI64 Never Reused: No 00:17:16.273 Namespace Write Protected: No 00:17:16.273 Number of LBA Formats: 1 00:17:16.273 Current LBA Format: LBA Format #00 00:17:16.273 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:16.273 00:17:16.273 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:16.530 [2024-10-28 04:53:06.950083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:21.793 Initializing NVMe Controllers 00:17:21.793 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:21.793 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:21.793 Initialization complete. Launching workers. 00:17:21.793 ======================================================== 00:17:21.793 Latency(us) 00:17:21.793 Device Information : IOPS MiB/s Average min max 00:17:21.793 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32026.20 125.10 3996.48 1181.27 9076.01 00:17:21.793 ======================================================== 00:17:21.793 Total : 32026.20 125.10 3996.48 1181.27 9076.01 00:17:21.793 00:17:21.793 [2024-10-28 04:53:11.959884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:21.793 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:21.793 [2024-10-28 04:53:12.317549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:27.055 Initializing NVMe Controllers 00:17:27.055 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:27.055 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:27.055 Initialization complete. Launching workers. 
00:17:27.055 ======================================================== 00:17:27.055 Latency(us) 00:17:27.055 Device Information : IOPS MiB/s Average min max 00:17:27.055 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15977.10 62.41 8016.68 6976.59 16025.13 00:17:27.055 ======================================================== 00:17:27.055 Total : 15977.10 62.41 8016.68 6976.59 16025.13 00:17:27.055 00:17:27.055 [2024-10-28 04:53:17.347559] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:27.055 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:27.312 [2024-10-28 04:53:17.667168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:32.573 [2024-10-28 04:53:22.711920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:32.573 Initializing NVMe Controllers 00:17:32.573 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:32.573 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:32.573 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:32.573 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:32.573 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:32.573 Initialization complete. Launching workers. 00:17:32.573 Starting thread on core 2 00:17:32.573 Starting thread on core 3 00:17:32.573 Starting thread on core 1 00:17:32.573 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:32.573 [2024-10-28 04:53:23.139039] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:35.856 [2024-10-28 04:53:26.207800] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:35.856 Initializing NVMe Controllers 00:17:35.856 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:35.856 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:35.856 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:35.856 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:35.856 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:35.856 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:35.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:35.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:35.856 Initialization complete. Launching workers. 
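The read and write latency summaries above come from the same spdk_nvme_perf binary, differing only in the -w workload argument. Below is a minimal sketch of that invocation, with every flag copied from the traced command lines recorded above (steps @84 and @85 of target/nvmf_vfio_user.sh run them as two separate commands; the build path and vfio-user socket path are the ones specific to this job's workspace, and the loop is only an illustration):

PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
for workload in read write; do
    # -q 128: queue depth, -o 4096: 4 KiB I/O size, -t 5: 5 second run, -c 0x2: core mask;
    # -s 256 and -g are the memory options carried over verbatim from the logged command lines.
    "$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$workload" -t 5 -c 0x2
done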
00:17:35.856 Starting thread on core 1 with urgent priority queue 00:17:35.856 Starting thread on core 2 with urgent priority queue 00:17:35.856 Starting thread on core 3 with urgent priority queue 00:17:35.856 Starting thread on core 0 with urgent priority queue 00:17:35.856 SPDK bdev Controller (SPDK1 ) core 0: 2712.00 IO/s 36.87 secs/100000 ios 00:17:35.856 SPDK bdev Controller (SPDK1 ) core 1: 3566.00 IO/s 28.04 secs/100000 ios 00:17:35.856 SPDK bdev Controller (SPDK1 ) core 2: 3731.67 IO/s 26.80 secs/100000 ios 00:17:35.856 SPDK bdev Controller (SPDK1 ) core 3: 3690.67 IO/s 27.10 secs/100000 ios 00:17:35.856 ======================================================== 00:17:35.856 00:17:35.856 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:36.113 [2024-10-28 04:53:26.620019] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:36.113 Initializing NVMe Controllers 00:17:36.113 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:36.113 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:36.113 Namespace ID: 1 size: 0GB 00:17:36.113 Initialization complete. 00:17:36.113 INFO: using host memory buffer for IO 00:17:36.113 Hello world! 00:17:36.113 [2024-10-28 04:53:26.653557] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:36.113 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:36.678 [2024-10-28 04:53:27.068013] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:37.612 Initializing NVMe Controllers 00:17:37.612 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:37.612 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:37.612 Initialization complete. Launching workers. 
00:17:37.612 submit (in ns) avg, min, max = 7310.4, 3497.3, 4026328.6 00:17:37.612 complete (in ns) avg, min, max = 25845.8, 2085.0, 5009082.2 00:17:37.612 00:17:37.612 Submit histogram 00:17:37.612 ================ 00:17:37.612 Range in us Cumulative Count 00:17:37.612 3.493 - 3.517: 0.1529% ( 20) 00:17:37.612 3.517 - 3.540: 0.5276% ( 49) 00:17:37.612 3.540 - 3.564: 1.7585% ( 161) 00:17:37.612 3.564 - 3.588: 4.8398% ( 403) 00:17:37.612 3.588 - 3.612: 10.7577% ( 774) 00:17:37.612 3.612 - 3.635: 18.2124% ( 975) 00:17:37.612 3.635 - 3.659: 28.6260% ( 1362) 00:17:37.612 3.659 - 3.683: 37.7170% ( 1189) 00:17:37.612 3.683 - 3.707: 46.0280% ( 1087) 00:17:37.612 3.707 - 3.730: 53.1998% ( 938) 00:17:37.612 3.730 - 3.754: 58.3760% ( 677) 00:17:37.612 3.754 - 3.778: 63.6134% ( 685) 00:17:37.612 3.778 - 3.802: 67.4134% ( 497) 00:17:37.612 3.802 - 3.826: 71.1522% ( 489) 00:17:37.612 3.826 - 3.849: 74.3329% ( 416) 00:17:37.612 3.849 - 3.873: 78.1635% ( 501) 00:17:37.612 3.873 - 3.897: 81.4971% ( 436) 00:17:37.612 3.897 - 3.921: 84.4254% ( 383) 00:17:37.612 3.921 - 3.944: 87.1473% ( 356) 00:17:37.612 3.944 - 3.968: 89.0664% ( 251) 00:17:37.612 3.968 - 3.992: 90.7715% ( 223) 00:17:37.612 3.992 - 4.016: 92.5453% ( 232) 00:17:37.612 4.016 - 4.039: 93.8145% ( 166) 00:17:37.612 4.039 - 4.063: 94.9155% ( 144) 00:17:37.612 4.063 - 4.087: 95.7260% ( 106) 00:17:37.612 4.087 - 4.111: 96.1083% ( 50) 00:17:37.612 4.111 - 4.134: 96.3759% ( 35) 00:17:37.612 4.134 - 4.158: 96.5976% ( 29) 00:17:37.612 4.158 - 4.182: 96.7352% ( 18) 00:17:37.612 4.182 - 4.206: 96.8576% ( 16) 00:17:37.612 4.206 - 4.229: 96.9417% ( 11) 00:17:37.612 4.229 - 4.253: 97.0181% ( 10) 00:17:37.612 4.253 - 4.277: 97.0793% ( 8) 00:17:37.612 4.277 - 4.301: 97.1634% ( 11) 00:17:37.612 4.301 - 4.324: 97.2322% ( 9) 00:17:37.612 4.324 - 4.348: 97.2628% ( 4) 00:17:37.612 4.348 - 4.372: 97.2934% ( 4) 00:17:37.612 4.372 - 4.396: 97.3087% ( 2) 00:17:37.612 4.420 - 4.443: 97.3240% ( 2) 00:17:37.612 4.467 - 4.491: 97.3545% ( 4) 00:17:37.612 4.491 - 4.515: 97.3698% ( 2) 00:17:37.612 4.586 - 4.610: 97.3775% ( 1) 00:17:37.612 4.633 - 4.657: 97.3928% ( 2) 00:17:37.612 4.657 - 4.681: 97.4004% ( 1) 00:17:37.612 4.681 - 4.705: 97.4234% ( 3) 00:17:37.612 4.705 - 4.728: 97.4386% ( 2) 00:17:37.612 4.728 - 4.752: 97.4616% ( 3) 00:17:37.612 4.752 - 4.776: 97.5227% ( 8) 00:17:37.612 4.776 - 4.800: 97.5916% ( 9) 00:17:37.612 4.800 - 4.823: 97.6604% ( 9) 00:17:37.612 4.823 - 4.847: 97.6986% ( 5) 00:17:37.612 4.847 - 4.871: 97.7827% ( 11) 00:17:37.612 4.871 - 4.895: 97.8362% ( 7) 00:17:37.612 4.895 - 4.919: 97.9203% ( 11) 00:17:37.612 4.919 - 4.942: 97.9662% ( 6) 00:17:37.612 4.942 - 4.966: 97.9815% ( 2) 00:17:37.612 4.966 - 4.990: 98.0197% ( 5) 00:17:37.612 4.990 - 5.014: 98.0580% ( 5) 00:17:37.612 5.014 - 5.037: 98.0732% ( 2) 00:17:37.612 5.037 - 5.061: 98.0885% ( 2) 00:17:37.612 5.061 - 5.085: 98.0962% ( 1) 00:17:37.612 5.085 - 5.109: 98.1421% ( 6) 00:17:37.612 5.109 - 5.132: 98.1726% ( 4) 00:17:37.612 5.132 - 5.156: 98.1803% ( 1) 00:17:37.612 5.156 - 5.180: 98.1956% ( 2) 00:17:37.612 5.180 - 5.204: 98.2032% ( 1) 00:17:37.612 5.204 - 5.227: 98.2109% ( 1) 00:17:37.612 5.275 - 5.299: 98.2185% ( 1) 00:17:37.612 5.370 - 5.394: 98.2262% ( 1) 00:17:37.612 5.394 - 5.417: 98.2338% ( 1) 00:17:37.612 5.417 - 5.441: 98.2415% ( 1) 00:17:37.612 5.513 - 5.536: 98.2567% ( 2) 00:17:37.612 5.584 - 5.608: 98.2644% ( 1) 00:17:37.612 5.631 - 5.655: 98.2720% ( 1) 00:17:37.612 6.130 - 6.178: 98.2797% ( 1) 00:17:37.612 6.273 - 6.320: 98.2950% ( 2) 00:17:37.612 6.320 - 6.368: 98.3026% ( 1) 
00:17:37.612 6.463 - 6.510: 98.3103% ( 1) 00:17:37.612 6.510 - 6.558: 98.3256% ( 2) 00:17:37.612 6.653 - 6.701: 98.3332% ( 1) 00:17:37.612 6.796 - 6.843: 98.3409% ( 1) 00:17:37.612 6.938 - 6.986: 98.3561% ( 2) 00:17:37.612 6.986 - 7.033: 98.3714% ( 2) 00:17:37.612 7.033 - 7.081: 98.3791% ( 1) 00:17:37.612 7.176 - 7.223: 98.3867% ( 1) 00:17:37.612 7.223 - 7.271: 98.3944% ( 1) 00:17:37.612 7.366 - 7.413: 98.4020% ( 1) 00:17:37.612 7.413 - 7.461: 98.4097% ( 1) 00:17:37.612 7.461 - 7.508: 98.4250% ( 2) 00:17:37.612 7.508 - 7.556: 98.4479% ( 3) 00:17:37.612 7.603 - 7.651: 98.4555% ( 1) 00:17:37.612 7.651 - 7.699: 98.4632% ( 1) 00:17:37.612 7.699 - 7.746: 98.4785% ( 2) 00:17:37.612 7.746 - 7.794: 98.4861% ( 1) 00:17:37.612 7.794 - 7.841: 98.5091% ( 3) 00:17:37.612 7.841 - 7.889: 98.5396% ( 4) 00:17:37.612 7.889 - 7.936: 98.5626% ( 3) 00:17:37.613 7.936 - 7.984: 98.5779% ( 2) 00:17:37.613 7.984 - 8.031: 98.5855% ( 1) 00:17:37.613 8.031 - 8.079: 98.6008% ( 2) 00:17:37.613 8.079 - 8.126: 98.6085% ( 1) 00:17:37.613 8.126 - 8.174: 98.6390% ( 4) 00:17:37.613 8.174 - 8.221: 98.6696% ( 4) 00:17:37.613 8.221 - 8.269: 98.6773% ( 1) 00:17:37.613 8.316 - 8.364: 98.6849% ( 1) 00:17:37.613 8.364 - 8.411: 98.6926% ( 1) 00:17:37.613 8.459 - 8.506: 98.7002% ( 1) 00:17:37.613 8.554 - 8.601: 98.7079% ( 1) 00:17:37.613 8.601 - 8.649: 98.7155% ( 1) 00:17:37.613 8.696 - 8.744: 98.7308% ( 2) 00:17:37.613 8.839 - 8.887: 98.7384% ( 1) 00:17:37.613 9.029 - 9.077: 98.7461% ( 1) 00:17:37.613 9.457 - 9.504: 98.7537% ( 1) 00:17:37.613 9.599 - 9.647: 98.7690% ( 2) 00:17:37.613 10.027 - 10.075: 98.7767% ( 1) 00:17:37.613 10.217 - 10.265: 98.7843% ( 1) 00:17:37.613 10.360 - 10.407: 98.7920% ( 1) 00:17:37.613 10.455 - 10.502: 98.7996% ( 1) 00:17:37.613 10.597 - 10.645: 98.8072% ( 1) 00:17:37.613 10.692 - 10.740: 98.8149% ( 1) 00:17:37.613 10.882 - 10.930: 98.8225% ( 1) 00:17:37.613 10.930 - 10.978: 98.8302% ( 1) 00:17:37.613 11.073 - 11.120: 98.8378% ( 1) 00:17:37.613 11.215 - 11.263: 98.8455% ( 1) 00:17:37.613 11.500 - 11.548: 98.8531% ( 1) 00:17:37.613 11.548 - 11.595: 98.8608% ( 1) 00:17:37.613 11.738 - 11.785: 98.8684% ( 1) 00:17:37.613 12.023 - 12.071: 98.8761% ( 1) 00:17:37.613 12.166 - 12.261: 98.8914% ( 2) 00:17:37.613 12.451 - 12.546: 98.8990% ( 1) 00:17:37.613 12.831 - 12.926: 98.9066% ( 1) 00:17:37.613 13.021 - 13.116: 98.9143% ( 1) 00:17:37.613 13.686 - 13.781: 98.9296% ( 2) 00:17:37.613 13.781 - 13.876: 98.9372% ( 1) 00:17:37.613 14.066 - 14.161: 98.9525% ( 2) 00:17:37.613 14.257 - 14.352: 98.9602% ( 1) 00:17:37.613 14.352 - 14.447: 98.9678% ( 1) 00:17:37.613 14.542 - 14.637: 98.9755% ( 1) 00:17:37.613 14.827 - 14.922: 98.9831% ( 1) 00:17:37.613 17.203 - 17.298: 98.9907% ( 1) 00:17:37.613 17.298 - 17.393: 99.0290% ( 5) 00:17:37.613 17.393 - 17.488: 99.0443% ( 2) 00:17:37.613 17.488 - 17.583: 99.0749% ( 4) 00:17:37.613 17.583 - 17.678: 99.1207% ( 6) 00:17:37.613 17.678 - 17.773: 99.1895% ( 9) 00:17:37.613 17.773 - 17.868: 99.2354% ( 6) 00:17:37.613 17.868 - 17.963: 99.2736% ( 5) 00:17:37.613 17.963 - 18.058: 99.3425% ( 9) 00:17:37.613 18.058 - 18.153: 99.3807% ( 5) 00:17:37.613 18.153 - 18.248: 99.5030% ( 16) 00:17:37.613 18.248 - 18.343: 99.5489% ( 6) 00:17:37.613 18.343 - 18.438: 99.6254% ( 10) 00:17:37.613 18.438 - 18.534: 99.6789% ( 7) 00:17:37.613 18.534 - 18.629: 99.6942% ( 2) 00:17:37.613 18.629 - 18.724: 99.7400% ( 6) 00:17:37.613 18.724 - 18.819: 99.7706% ( 4) 00:17:37.613 18.819 - 18.914: 99.7936% ( 3) 00:17:37.613 18.914 - 19.009: 99.8318% ( 5) 00:17:37.613 19.009 - 19.104: 99.8394% ( 1) 00:17:37.613 
19.389 - 19.484: 99.8547% ( 2) 00:17:37.613 19.484 - 19.579: 99.8700% ( 2) 00:17:37.613 19.579 - 19.674: 99.8777% ( 1) 00:17:37.613 19.864 - 19.959: 99.8853% ( 1) 00:17:37.613 20.149 - 20.244: 99.8930% ( 1) 00:17:37.613 22.810 - 22.906: 99.9006% ( 1) 00:17:37.613 27.373 - 27.563: 99.9082% ( 1) 00:17:37.613 31.364 - 31.554: 99.9159% ( 1) 00:17:37.613 3990.311 - 4014.643: 99.9847% ( 9) 00:17:37.613 4014.643 - 4038.974: 100.0000% ( 2) 00:17:37.613 00:17:37.613 Complete histogram 00:17:37.613 ================== 00:17:37.613 Range in us Cumulative Count 00:17:37.613 2.079 - 2.091: 1.6591% ( 217) 00:17:37.613 2.091 - 2.103: 19.0840% ( 2279) 00:17:37.613 2.103 - 2.115: 27.1122% ( 1050) 00:17:37.613 2.115 - 2.127: 37.5029% ( 1359) 00:17:37.613 2.127 - 2.138: 57.5579% ( 2623) 00:17:37.613 2.138 - 2.150: 61.2432% ( 482) 00:17:37.613 2.150 - 2.162: 65.0661% ( 500) 00:17:37.613 2.162 - 2.174: 72.9719% ( 1034) 00:17:37.613 2.174 - 2.186: 75.5257% ( 334) 00:17:37.613 2.186 - 2.198: 80.6866% ( 675) 00:17:37.613 2.198 - 2.210: 87.4608% ( 886) 00:17:37.613 2.210 - 2.222: 88.8065% ( 176) 00:17:37.613 2.222 - 2.234: 89.8004% ( 130) 00:17:37.613 2.234 - 2.245: 90.9932% ( 156) 00:17:37.613 2.245 - 2.257: 92.5682% ( 206) 00:17:37.613 2.257 - 2.269: 94.0668% ( 196) 00:17:37.613 2.269 - 2.281: 94.8238% ( 99) 00:17:37.613 2.281 - 2.293: 95.1143% ( 38) 00:17:37.613 2.293 - 2.305: 95.3590% ( 32) 00:17:37.613 2.305 - 2.317: 95.5272% ( 22) 00:17:37.613 2.317 - 2.329: 95.8254% ( 39) 00:17:37.613 2.329 - 2.340: 96.0777% ( 33) 00:17:37.613 2.340 - 2.352: 96.1159% ( 5) 00:17:37.613 2.352 - 2.364: 96.1694% ( 7) 00:17:37.613 2.364 - 2.376: 96.2382% ( 9) 00:17:37.613 2.376 - 2.388: 96.4217% ( 24) 00:17:37.613 2.388 - 2.400: 96.7887% ( 48) 00:17:37.613 2.400 - 2.412: 97.1405% ( 46) 00:17:37.613 2.412 - 2.424: 97.3851% ( 32) 00:17:37.613 2.424 - 2.435: 97.6833% ( 39) 00:17:37.613 2.435 - 2.447: 97.8515% ( 22) 00:17:37.613 2.447 - 2.459: 97.9815% ( 17) 00:17:37.613 2.459 - 2.471: 98.1115% ( 17) 00:17:37.613 2.471 - 2.483: 98.1956% ( 11) 00:17:37.613 2.483 - 2.495: 98.2644% ( 9) 00:17:37.613 2.495 - 2.507: 98.2873% ( 3) 00:17:37.613 2.507 - 2.519: 98.3409% ( 7) 00:17:37.613 2.519 - 2.531: 98.3791% ( 5) 00:17:37.613 2.531 - 2.542: 98.3867% ( 1) 00:17:37.613 2.542 - 2.554: 98.4020% ( 2) 00:17:37.613 2.566 - 2.578: 98.4097% ( 1) 00:17:37.613 2.578 - 2.590: 98.4173% ( 1) 00:17:37.613 2.590 - 2.602: 98.4250% ( 1) 00:17:37.613 2.637 - 2.649: 98.4326% ( 1) 00:17:37.613 2.649 - 2.661: 98.4402% ( 1) 00:17:37.613 2.661 - 2.673: 98.4555% ( 2) 00:17:37.613 2.673 - 2.685: 98.4632% ( 1) 00:17:37.613 2.744 - 2.756: 98.4785% ( 2) 00:17:37.613 2.792 - 2.804: 98.4861% ( 1) 00:17:37.613 2.828 - 2.839: 98.4938% ( 1) 00:17:37.613 2.934 - 2.946: 98.5014% ( 1) 00:17:37.613 3.208 - 3.231: 98.5167% ( 2) 00:17:37.613 3.255 - 3.279: 98.5244% ( 1) 00:17:37.613 3.303 - 3.327: 98.5396% ( 2) 00:17:37.613 3.327 - 3.350: 98.5549% ( 2) 00:17:37.613 3.350 - 3.374: 98.5702% ( 2) 00:17:37.613 3.374 - 3.398: 98.5779% ( 1) 00:17:37.613 3.398 - 3.422: 98.5932% ( 2) 00:17:37.613 3.422 - 3.445: 98.6161% ( 3) 00:17:37.613 3.445 - 3.469: 98.6237% ( 1) 00:17:37.613 3.469 - 3.493: 98.6314% ( 1) 00:17:37.613 3.493 - 3.517: 98.6390% ( 1) 00:17:37.613 3.564 - 3.588: 98.6467% ( 1) 00:17:37.613 3.612 - 3.635: 98.6543% ( 1) 00:17:37.613 3.635 - 3.659: 98.6620% ( 1) 00:17:37.613 3.659 - 3.683: 98.6773% ( 2) 00:17:37.613 3.683 - 3.707: 98.6849% ( 1) 00:17:37.613 3.730 - 3.754: 98.7002% ( 2) 00:17:37.613 3.754 - 3.778: 98.7079% ( 1) 00:17:37.613 3.778 - 3.802: 98.7308% ( 3) 
00:17:37.613 3.873 - 3.897: 98.7461% ( 2) 00:17:37.613 3.897 - 3.921: 98.7537% ( 1) 00:17:37.613 3.921 - 3.944: 98.7614% ( 1) 00:17:37.613 3.992 - 4.016: 98.7690% ( 1) 00:17:37.613 5.251 - 5.275: 98.7767% ( 1) 00:17:37.613 5.798 - 5.821: 98.7843% ( 1) 00:17:37.613 5.916 - 5.940: 98.7920% ( 1) 00:17:37.613 5.964 - 5.988: 98.8072% ( 2) 00:17:37.614 6.178 - 6.225: 98.8225% ( 2) 00:17:37.614 6.415 - 6.463: 98.8302% ( 1) 00:17:37.614 6.463 - 6.510: 98.8455% ( 2) 00:17:37.614 6.558 - 6.606: 98.8531% ( 1) 00:17:37.614 6.653 - 6.701: 9[2024-10-28 04:53:28.084698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:37.614 8.8608% ( 1) 00:17:37.614 6.843 - 6.891: 98.8684% ( 1) 00:17:37.614 6.938 - 6.986: 98.8761% ( 1) 00:17:37.614 7.033 - 7.081: 98.8837% ( 1) 00:17:37.614 7.176 - 7.223: 98.8914% ( 1) 00:17:37.614 7.271 - 7.318: 98.8990% ( 1) 00:17:37.614 7.318 - 7.366: 98.9066% ( 1) 00:17:37.614 7.413 - 7.461: 98.9143% ( 1) 00:17:37.614 7.936 - 7.984: 98.9219% ( 1) 00:17:37.614 15.682 - 15.777: 98.9372% ( 2) 00:17:37.614 15.777 - 15.872: 98.9449% ( 1) 00:17:37.614 15.872 - 15.967: 98.9602% ( 2) 00:17:37.614 15.967 - 16.062: 98.9984% ( 5) 00:17:37.614 16.062 - 16.157: 99.0137% ( 2) 00:17:37.614 16.157 - 16.252: 99.0978% ( 11) 00:17:37.614 16.348 - 16.443: 99.1284% ( 4) 00:17:37.614 16.443 - 16.538: 99.1666% ( 5) 00:17:37.614 16.538 - 16.633: 99.1972% ( 4) 00:17:37.614 16.633 - 16.728: 99.2660% ( 9) 00:17:37.614 16.728 - 16.823: 99.2736% ( 1) 00:17:37.614 16.823 - 16.918: 99.2966% ( 3) 00:17:37.614 16.918 - 17.013: 99.3119% ( 2) 00:17:37.614 17.013 - 17.108: 99.3348% ( 3) 00:17:37.614 17.203 - 17.298: 99.3501% ( 2) 00:17:37.614 17.298 - 17.393: 99.3730% ( 3) 00:17:37.614 17.393 - 17.488: 99.3960% ( 3) 00:17:37.614 18.153 - 18.248: 99.4036% ( 1) 00:17:37.614 18.248 - 18.343: 99.4113% ( 1) 00:17:37.614 3163.052 - 3187.383: 99.4189% ( 1) 00:17:37.614 3990.311 - 4014.643: 99.9312% ( 67) 00:17:37.614 4014.643 - 4038.974: 99.9924% ( 8) 00:17:37.614 4987.889 - 5012.221: 100.0000% ( 1) 00:17:37.614 00:17:37.614 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:37.614 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:37.614 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:37.614 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:37.614 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:37.872 [ 00:17:37.872 { 00:17:37.872 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:37.872 "subtype": "Discovery", 00:17:37.872 "listen_addresses": [], 00:17:37.872 "allow_any_host": true, 00:17:37.872 "hosts": [] 00:17:37.872 }, 00:17:37.872 { 00:17:37.872 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:37.872 "subtype": "NVMe", 00:17:37.872 "listen_addresses": [ 00:17:37.872 { 00:17:37.872 "trtype": "VFIOUSER", 00:17:37.872 "adrfam": "IPv4", 00:17:37.872 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:37.872 "trsvcid": "0" 00:17:37.872 } 00:17:37.872 ], 00:17:37.872 "allow_any_host": true, 00:17:37.872 "hosts": [], 00:17:37.872 "serial_number": "SPDK1", 00:17:37.872 
"model_number": "SPDK bdev Controller", 00:17:37.872 "max_namespaces": 32, 00:17:37.872 "min_cntlid": 1, 00:17:37.872 "max_cntlid": 65519, 00:17:37.872 "namespaces": [ 00:17:37.872 { 00:17:37.872 "nsid": 1, 00:17:37.872 "bdev_name": "Malloc1", 00:17:37.872 "name": "Malloc1", 00:17:37.872 "nguid": "62D8BED6AA63422F987AC4FF9A441846", 00:17:37.872 "uuid": "62d8bed6-aa63-422f-987a-c4ff9a441846" 00:17:37.872 } 00:17:37.872 ] 00:17:37.872 }, 00:17:37.873 { 00:17:37.873 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:37.873 "subtype": "NVMe", 00:17:37.873 "listen_addresses": [ 00:17:37.873 { 00:17:37.873 "trtype": "VFIOUSER", 00:17:37.873 "adrfam": "IPv4", 00:17:37.873 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:37.873 "trsvcid": "0" 00:17:37.873 } 00:17:37.873 ], 00:17:37.873 "allow_any_host": true, 00:17:37.873 "hosts": [], 00:17:37.873 "serial_number": "SPDK2", 00:17:37.873 "model_number": "SPDK bdev Controller", 00:17:37.873 "max_namespaces": 32, 00:17:37.873 "min_cntlid": 1, 00:17:37.873 "max_cntlid": 65519, 00:17:37.873 "namespaces": [ 00:17:37.873 { 00:17:37.873 "nsid": 1, 00:17:37.873 "bdev_name": "Malloc2", 00:17:37.873 "name": "Malloc2", 00:17:37.873 "nguid": "EAB5EFD41303404F8DA9D8BDF789F3B9", 00:17:37.873 "uuid": "eab5efd4-1303-404f-8da9-d8bdf789f3b9" 00:17:37.873 } 00:17:37.873 ] 00:17:37.873 } 00:17:37.873 ] 00:17:37.873 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:37.873 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2308570 00:17:37.873 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:37.873 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:37.873 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:37.873 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:37.873 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:37.873 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:37.873 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:37.873 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:38.132 [2024-10-28 04:53:28.695025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:38.132 Malloc3 00:17:38.389 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:38.647 [2024-10-28 04:53:29.009525] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:38.647 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:38.647 Asynchronous Event Request test 00:17:38.647 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:38.647 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:38.647 Registering asynchronous event callbacks... 00:17:38.647 Starting namespace attribute notice tests for all controllers... 00:17:38.647 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:38.647 aer_cb - Changed Namespace 00:17:38.647 Cleaning up... 00:17:38.905 [ 00:17:38.905 { 00:17:38.905 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:38.905 "subtype": "Discovery", 00:17:38.905 "listen_addresses": [], 00:17:38.905 "allow_any_host": true, 00:17:38.905 "hosts": [] 00:17:38.905 }, 00:17:38.905 { 00:17:38.905 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:38.905 "subtype": "NVMe", 00:17:38.905 "listen_addresses": [ 00:17:38.905 { 00:17:38.905 "trtype": "VFIOUSER", 00:17:38.905 "adrfam": "IPv4", 00:17:38.905 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:38.905 "trsvcid": "0" 00:17:38.905 } 00:17:38.905 ], 00:17:38.905 "allow_any_host": true, 00:17:38.905 "hosts": [], 00:17:38.905 "serial_number": "SPDK1", 00:17:38.905 "model_number": "SPDK bdev Controller", 00:17:38.905 "max_namespaces": 32, 00:17:38.905 "min_cntlid": 1, 00:17:38.905 "max_cntlid": 65519, 00:17:38.905 "namespaces": [ 00:17:38.905 { 00:17:38.905 "nsid": 1, 00:17:38.905 "bdev_name": "Malloc1", 00:17:38.905 "name": "Malloc1", 00:17:38.905 "nguid": "62D8BED6AA63422F987AC4FF9A441846", 00:17:38.905 "uuid": "62d8bed6-aa63-422f-987a-c4ff9a441846" 00:17:38.905 }, 00:17:38.905 { 00:17:38.905 "nsid": 2, 00:17:38.905 "bdev_name": "Malloc3", 00:17:38.905 "name": "Malloc3", 00:17:38.905 "nguid": "D93CE9E648784D07AA08C72B6EC0DEB4", 00:17:38.905 "uuid": "d93ce9e6-4878-4d07-aa08-c72b6ec0deb4" 00:17:38.905 } 00:17:38.905 ] 00:17:38.905 }, 00:17:38.905 { 00:17:38.905 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:38.905 "subtype": "NVMe", 00:17:38.905 "listen_addresses": [ 00:17:38.905 { 00:17:38.905 "trtype": "VFIOUSER", 00:17:38.905 "adrfam": "IPv4", 00:17:38.905 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:38.905 "trsvcid": "0" 00:17:38.905 } 00:17:38.905 ], 00:17:38.905 "allow_any_host": true, 00:17:38.905 "hosts": [], 00:17:38.905 "serial_number": "SPDK2", 00:17:38.905 "model_number": "SPDK bdev 
Controller", 00:17:38.905 "max_namespaces": 32, 00:17:38.905 "min_cntlid": 1, 00:17:38.905 "max_cntlid": 65519, 00:17:38.905 "namespaces": [ 00:17:38.905 { 00:17:38.905 "nsid": 1, 00:17:38.905 "bdev_name": "Malloc2", 00:17:38.905 "name": "Malloc2", 00:17:38.905 "nguid": "EAB5EFD41303404F8DA9D8BDF789F3B9", 00:17:38.905 "uuid": "eab5efd4-1303-404f-8da9-d8bdf789f3b9" 00:17:38.905 } 00:17:38.905 ] 00:17:38.905 } 00:17:38.905 ] 00:17:38.905 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2308570 00:17:38.905 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:38.905 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:38.905 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:38.905 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:38.905 [2024-10-28 04:53:29.320728] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:17:38.906 [2024-10-28 04:53:29.320767] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308700 ] 00:17:38.906 [2024-10-28 04:53:29.434175] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:38.906 [2024-10-28 04:53:29.468354] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:38.906 [2024-10-28 04:53:29.479922] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:38.906 [2024-10-28 04:53:29.479965] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcafcbbf000 00:17:38.906 [2024-10-28 04:53:29.480920] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:38.906 [2024-10-28 04:53:29.481919] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:38.906 [2024-10-28 04:53:29.482940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:38.906 [2024-10-28 04:53:29.483946] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:38.906 [2024-10-28 04:53:29.484955] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:38.906 [2024-10-28 04:53:29.485943] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:38.906 [2024-10-28 04:53:29.486951] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:38.906 [2024-10-28 04:53:29.487955] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:38.906 [2024-10-28 04:53:29.488974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:38.906 [2024-10-28 04:53:29.488995] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcafb8bf000 00:17:38.906 [2024-10-28 04:53:29.490111] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:39.172 [2024-10-28 04:53:29.507000] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:39.172 [2024-10-28 04:53:29.507036] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:17:39.172 [2024-10-28 04:53:29.509134] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:39.172 [2024-10-28 04:53:29.509183] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:39.172 [2024-10-28 04:53:29.509271] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:17:39.172 [2024-10-28 04:53:29.509296] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:17:39.172 [2024-10-28 04:53:29.509307] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs 
(no timeout) 00:17:39.172 [2024-10-28 04:53:29.510138] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:39.172 [2024-10-28 04:53:29.510159] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:17:39.172 [2024-10-28 04:53:29.510172] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:17:39.172 [2024-10-28 04:53:29.511138] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:39.172 [2024-10-28 04:53:29.511158] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:17:39.172 [2024-10-28 04:53:29.511172] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:39.172 [2024-10-28 04:53:29.512139] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:39.172 [2024-10-28 04:53:29.512158] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:39.172 [2024-10-28 04:53:29.513145] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:39.172 [2024-10-28 04:53:29.513165] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:39.172 [2024-10-28 04:53:29.513174] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:39.172 [2024-10-28 04:53:29.513186] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:39.172 [2024-10-28 04:53:29.513295] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:17:39.172 [2024-10-28 04:53:29.513303] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:39.172 [2024-10-28 04:53:29.513311] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003a0000 00:17:39.172 [2024-10-28 04:53:29.514149] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003a6000 00:17:39.172 [2024-10-28 04:53:29.515155] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:39.172 [2024-10-28 04:53:29.516150] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:39.172 [2024-10-28 04:53:29.517152] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:39.172 [2024-10-28 04:53:29.517228] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:39.172 [2024-10-28 04:53:29.518168] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:39.172 [2024-10-28 04:53:29.518189] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:39.172 [2024-10-28 04:53:29.518199] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:39.172 [2024-10-28 04:53:29.518223] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:17:39.172 [2024-10-28 04:53:29.518237] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:39.172 [2024-10-28 04:53:29.518256] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002dd000 len:4096 00:17:39.172 [2024-10-28 04:53:29.518265] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002dd000 00:17:39.172 [2024-10-28 04:53:29.518272] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:39.172 [2024-10-28 04:53:29.518289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002dd000 PRP2 0x0 00:17:39.172 [2024-10-28 04:53:29.524664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:39.172 [2024-10-28 04:53:29.524686] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:17:39.172 [2024-10-28 04:53:29.524695] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:17:39.172 [2024-10-28 04:53:29.524702] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:17:39.172 [2024-10-28 04:53:29.524710] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:39.173 [2024-10-28 04:53:29.524718] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:17:39.173 [2024-10-28 04:53:29.524725] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:17:39.173 [2024-10-28 04:53:29.524733] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.524745] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.524761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.532646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 
00:17:39.173 [2024-10-28 04:53:29.532675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.173 [2024-10-28 04:53:29.532689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.173 [2024-10-28 04:53:29.532701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.173 [2024-10-28 04:53:29.532713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.173 [2024-10-28 04:53:29.532722] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.532738] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.532752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.540645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:39.173 [2024-10-28 04:53:29.540668] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:17:39.173 [2024-10-28 04:53:29.540679] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.540691] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.540701] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.540714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.548644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:39.173 [2024-10-28 04:53:29.548720] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.548737] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.548750] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:17:39.173 [2024-10-28 04:53:29.548759] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:17:39.173 [2024-10-28 04:53:29.548765] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:39.173 [2024-10-28 04:53:29.548774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002db000 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.556644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:39.173 [2024-10-28 04:53:29.556666] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:17:39.173 [2024-10-28 04:53:29.556686] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.556702] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.556714] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002dd000 len:4096 00:17:39.173 [2024-10-28 04:53:29.556723] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002dd000 00:17:39.173 [2024-10-28 04:53:29.556729] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:39.173 [2024-10-28 04:53:29.556739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002dd000 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.564646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:39.173 [2024-10-28 04:53:29.564676] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.564692] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.564709] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002dd000 len:4096 00:17:39.173 [2024-10-28 04:53:29.564718] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002dd000 00:17:39.173 [2024-10-28 04:53:29.564724] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:39.173 [2024-10-28 04:53:29.564734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002dd000 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.572646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:39.173 [2024-10-28 04:53:29.572667] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.572680] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.572694] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.572705] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.572714] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.572723] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.572732] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:39.173 [2024-10-28 04:53:29.572740] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:17:39.173 [2024-10-28 04:53:29.572748] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:17:39.173 [2024-10-28 04:53:29.572772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.580660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:39.173 [2024-10-28 04:53:29.580686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.588648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:39.173 [2024-10-28 04:53:29.588673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.596648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:39.173 [2024-10-28 04:53:29.596673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.604646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:39.173 [2024-10-28 04:53:29.604676] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d8000 len:8192 00:17:39.173 [2024-10-28 04:53:29.604687] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d8000 00:17:39.173 [2024-10-28 04:53:29.604693] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002d9000 00:17:39.173 [2024-10-28 04:53:29.604699] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002d9000 00:17:39.173 [2024-10-28 04:53:29.604708] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:39.173 [2024-10-28 04:53:29.604719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002d8000 PRP2 0x2000002d9000 00:17:39.173 [2024-10-28 04:53:29.604731] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002de000 len:512 00:17:39.173 [2024-10-28 04:53:29.604739] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002de000 00:17:39.173 [2024-10-28 04:53:29.604745] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:39.173 [2024-10-28 04:53:29.604754] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002de000 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.604765] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002dd000 len:512 00:17:39.173 [2024-10-28 04:53:29.604772] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002dd000 00:17:39.173 [2024-10-28 04:53:29.604778] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:39.173 [2024-10-28 04:53:29.604787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002dd000 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.604802] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d6000 len:4096 00:17:39.173 [2024-10-28 04:53:29.604811] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d6000 00:17:39.173 [2024-10-28 04:53:29.604817] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:39.173 [2024-10-28 04:53:29.604826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002d6000 PRP2 0x0 00:17:39.173 [2024-10-28 04:53:29.612649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:39.173 [2024-10-28 04:53:29.612676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:39.173 [2024-10-28 04:53:29.612694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:39.173 [2024-10-28 04:53:29.612706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:39.173 ===================================================== 00:17:39.173 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:39.173 ===================================================== 00:17:39.173 Controller Capabilities/Features 00:17:39.173 ================================ 00:17:39.173 Vendor ID: 4e58 00:17:39.173 Subsystem Vendor ID: 4e58 00:17:39.173 Serial Number: SPDK2 00:17:39.173 Model Number: SPDK bdev Controller 00:17:39.173 Firmware Version: 25.01 00:17:39.173 Recommended Arb Burst: 6 00:17:39.173 IEEE OUI Identifier: 8d 6b 50 00:17:39.173 Multi-path I/O 00:17:39.173 May have multiple subsystem ports: Yes 00:17:39.173 May have multiple controllers: Yes 00:17:39.173 Associated with SR-IOV VF: No 00:17:39.173 Max Data Transfer Size: 131072 00:17:39.173 Max Number of Namespaces: 32 00:17:39.173 Max Number of I/O Queues: 127 00:17:39.173 NVMe Specification Version (VS): 1.3 00:17:39.173 NVMe Specification Version (Identify): 1.3 00:17:39.173 Maximum Queue Entries: 256 00:17:39.173 Contiguous Queues Required: Yes 00:17:39.173 Arbitration Mechanisms Supported 00:17:39.173 Weighted Round Robin: Not Supported 00:17:39.173 Vendor Specific: Not Supported 00:17:39.173 Reset Timeout: 15000 ms 00:17:39.173 Doorbell Stride: 4 bytes 00:17:39.173 NVM Subsystem Reset: Not Supported 00:17:39.173 Command Sets Supported 00:17:39.173 NVM Command Set: Supported 00:17:39.173 Boot Partition: Not Supported 00:17:39.173 Memory Page Size Minimum: 4096 bytes 
00:17:39.173 Memory Page Size Maximum: 4096 bytes 00:17:39.173 Persistent Memory Region: Not Supported 00:17:39.173 Optional Asynchronous Events Supported 00:17:39.173 Namespace Attribute Notices: Supported 00:17:39.173 Firmware Activation Notices: Not Supported 00:17:39.173 ANA Change Notices: Not Supported 00:17:39.173 PLE Aggregate Log Change Notices: Not Supported 00:17:39.174 LBA Status Info Alert Notices: Not Supported 00:17:39.174 EGE Aggregate Log Change Notices: Not Supported 00:17:39.174 Normal NVM Subsystem Shutdown event: Not Supported 00:17:39.174 Zone Descriptor Change Notices: Not Supported 00:17:39.174 Discovery Log Change Notices: Not Supported 00:17:39.174 Controller Attributes 00:17:39.174 128-bit Host Identifier: Supported 00:17:39.174 Non-Operational Permissive Mode: Not Supported 00:17:39.174 NVM Sets: Not Supported 00:17:39.174 Read Recovery Levels: Not Supported 00:17:39.174 Endurance Groups: Not Supported 00:17:39.174 Predictable Latency Mode: Not Supported 00:17:39.174 Traffic Based Keep ALive: Not Supported 00:17:39.174 Namespace Granularity: Not Supported 00:17:39.174 SQ Associations: Not Supported 00:17:39.174 UUID List: Not Supported 00:17:39.174 Multi-Domain Subsystem: Not Supported 00:17:39.174 Fixed Capacity Management: Not Supported 00:17:39.174 Variable Capacity Management: Not Supported 00:17:39.174 Delete Endurance Group: Not Supported 00:17:39.174 Delete NVM Set: Not Supported 00:17:39.174 Extended LBA Formats Supported: Not Supported 00:17:39.174 Flexible Data Placement Supported: Not Supported 00:17:39.174 00:17:39.174 Controller Memory Buffer Support 00:17:39.174 ================================ 00:17:39.174 Supported: No 00:17:39.174 00:17:39.174 Persistent Memory Region Support 00:17:39.174 ================================ 00:17:39.174 Supported: No 00:17:39.174 00:17:39.174 Admin Command Set Attributes 00:17:39.174 ============================ 00:17:39.174 Security Send/Receive: Not Supported 00:17:39.174 Format NVM: Not Supported 00:17:39.174 Firmware Activate/Download: Not Supported 00:17:39.174 Namespace Management: Not Supported 00:17:39.174 Device Self-Test: Not Supported 00:17:39.174 Directives: Not Supported 00:17:39.174 NVMe-MI: Not Supported 00:17:39.174 Virtualization Management: Not Supported 00:17:39.174 Doorbell Buffer Config: Not Supported 00:17:39.174 Get LBA Status Capability: Not Supported 00:17:39.174 Command & Feature Lockdown Capability: Not Supported 00:17:39.174 Abort Command Limit: 4 00:17:39.174 Async Event Request Limit: 4 00:17:39.174 Number of Firmware Slots: N/A 00:17:39.174 Firmware Slot 1 Read-Only: N/A 00:17:39.174 Firmware Activation Without Reset: N/A 00:17:39.174 Multiple Update Detection Support: N/A 00:17:39.174 Firmware Update Granularity: No Information Provided 00:17:39.174 Per-Namespace SMART Log: No 00:17:39.174 Asymmetric Namespace Access Log Page: Not Supported 00:17:39.174 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:39.174 Command Effects Log Page: Supported 00:17:39.174 Get Log Page Extended Data: Supported 00:17:39.174 Telemetry Log Pages: Not Supported 00:17:39.174 Persistent Event Log Pages: Not Supported 00:17:39.174 Supported Log Pages Log Page: May Support 00:17:39.174 Commands Supported & Effects Log Page: Not Supported 00:17:39.174 Feature Identifiers & Effects Log Page:May Support 00:17:39.174 NVMe-MI Commands & Effects Log Page: May Support 00:17:39.174 Data Area 4 for Telemetry Log: Not Supported 00:17:39.174 Error Log Page Entries Supported: 128 00:17:39.174 Keep Alive: Supported 
00:17:39.174 Keep Alive Granularity: 10000 ms 00:17:39.174 00:17:39.174 NVM Command Set Attributes 00:17:39.174 ========================== 00:17:39.174 Submission Queue Entry Size 00:17:39.174 Max: 64 00:17:39.174 Min: 64 00:17:39.174 Completion Queue Entry Size 00:17:39.174 Max: 16 00:17:39.174 Min: 16 00:17:39.174 Number of Namespaces: 32 00:17:39.174 Compare Command: Supported 00:17:39.174 Write Uncorrectable Command: Not Supported 00:17:39.174 Dataset Management Command: Supported 00:17:39.174 Write Zeroes Command: Supported 00:17:39.174 Set Features Save Field: Not Supported 00:17:39.174 Reservations: Not Supported 00:17:39.174 Timestamp: Not Supported 00:17:39.174 Copy: Supported 00:17:39.174 Volatile Write Cache: Present 00:17:39.174 Atomic Write Unit (Normal): 1 00:17:39.174 Atomic Write Unit (PFail): 1 00:17:39.174 Atomic Compare & Write Unit: 1 00:17:39.174 Fused Compare & Write: Supported 00:17:39.174 Scatter-Gather List 00:17:39.174 SGL Command Set: Supported (Dword aligned) 00:17:39.174 SGL Keyed: Not Supported 00:17:39.174 SGL Bit Bucket Descriptor: Not Supported 00:17:39.174 SGL Metadata Pointer: Not Supported 00:17:39.174 Oversized SGL: Not Supported 00:17:39.174 SGL Metadata Address: Not Supported 00:17:39.174 SGL Offset: Not Supported 00:17:39.174 Transport SGL Data Block: Not Supported 00:17:39.174 Replay Protected Memory Block: Not Supported 00:17:39.174 00:17:39.174 Firmware Slot Information 00:17:39.174 ========================= 00:17:39.174 Active slot: 1 00:17:39.174 Slot 1 Firmware Revision: 25.01 00:17:39.174 00:17:39.174 00:17:39.174 Commands Supported and Effects 00:17:39.174 ============================== 00:17:39.174 Admin Commands 00:17:39.174 -------------- 00:17:39.174 Get Log Page (02h): Supported 00:17:39.174 Identify (06h): Supported 00:17:39.174 Abort (08h): Supported 00:17:39.174 Set Features (09h): Supported 00:17:39.174 Get Features (0Ah): Supported 00:17:39.174 Asynchronous Event Request (0Ch): Supported 00:17:39.174 Keep Alive (18h): Supported 00:17:39.174 I/O Commands 00:17:39.175 ------------ 00:17:39.175 Flush (00h): Supported LBA-Change 00:17:39.175 Write (01h): Supported LBA-Change 00:17:39.175 Read (02h): Supported 00:17:39.175 Compare (05h): Supported 00:17:39.175 Write Zeroes (08h): Supported LBA-Change 00:17:39.175 Dataset Management (09h): Supported LBA-Change 00:17:39.175 Copy (19h): Supported LBA-Change 00:17:39.175 00:17:39.175 Error Log 00:17:39.175 ========= 00:17:39.175 00:17:39.175 Arbitration 00:17:39.175 =========== 00:17:39.175 Arbitration Burst: 1 00:17:39.175 00:17:39.175 Power Management 00:17:39.175 ================ 00:17:39.175 Number of Power States: 1 00:17:39.175 Current Power State: Power State #0 00:17:39.175 Power State #0: 00:17:39.175 Max Power: 0.00 W 00:17:39.175 Non-Operational State: Operational 00:17:39.175 Entry Latency: Not Reported 00:17:39.175 Exit Latency: Not Reported 00:17:39.175 Relative Read Throughput: 0 00:17:39.175 Relative Read Latency: 0 00:17:39.175 Relative Write Throughput: 0 00:17:39.175 Relative Write Latency: 0 00:17:39.175 Idle Power: Not Reported 00:17:39.175 Active Power: Not Reported 00:17:39.175 Non-Operational Permissive Mode: Not Supported 00:17:39.175 00:17:39.175 Health Information 00:17:39.175 ================== 00:17:39.175 Critical Warnings: 00:17:39.175 Available Spare Space: OK 00:17:39.175 Temperature: OK 00:17:39.175 Device Reliability: OK 00:17:39.175 Read Only: No 00:17:39.175 Volatile Memory Backup: OK 00:17:39.175 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:39.175 
Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:39.175 Available Spare: 0% 00:17:39.175 Available Spare Threshold: 0% 00:17:39.175 [2024-10-28 04:53:29.612834] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:39.175 [2024-10-28 04:53:29.620647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:39.175 [2024-10-28 04:53:29.620712] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:17:39.175 [2024-10-28 04:53:29.620730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.175 [2024-10-28 04:53:29.620741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.175 [2024-10-28 04:53:29.620751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.175 [2024-10-28 04:53:29.620760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.175 [2024-10-28 04:53:29.620842] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:39.175 [2024-10-28 04:53:29.620862] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:39.175 [2024-10-28 04:53:29.621832] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:39.175 [2024-10-28 04:53:29.621914] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:17:39.175 [2024-10-28 04:53:29.621934] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:17:39.175 [2024-10-28 04:53:29.622847] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:39.175 [2024-10-28 04:53:29.622871] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:17:39.175 [2024-10-28 04:53:29.622923] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:39.175 [2024-10-28 04:53:29.624111] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:39.175 Life Percentage Used: 0% 00:17:39.175 Data Units Read: 0 00:17:39.175 Data Units Written: 0 00:17:39.175 Host Read Commands: 0 00:17:39.175 Host Write Commands: 0 00:17:39.175 Controller Busy Time: 0 minutes 00:17:39.175 Power Cycles: 0 00:17:39.175 Power On Hours: 0 hours 00:17:39.175 Unsafe Shutdowns: 0 00:17:39.175 Unrecoverable Media Errors: 0 00:17:39.175 Lifetime Error Log Entries: 0 00:17:39.175 Warning Temperature Time: 0 minutes 00:17:39.175 Critical Temperature Time: 0 minutes 00:17:39.175 00:17:39.175 Number of Queues 00:17:39.175 ================ 00:17:39.175 Number of I/O Submission Queues: 127 00:17:39.175 Number of I/O Completion Queues: 127 00:17:39.175 00:17:39.175 Active Namespaces 
================= 00:17:39.175 Namespace ID:1 00:17:39.175 Error Recovery Timeout: Unlimited 00:17:39.175 Command Set Identifier: NVM (00h) 00:17:39.175 Deallocate: Supported 00:17:39.175 Deallocated/Unwritten Error: Not Supported 00:17:39.175 Deallocated Read Value: Unknown 00:17:39.175 Deallocate in Write Zeroes: Not Supported 00:17:39.175 Deallocated Guard Field: 0xFFFF 00:17:39.175 Flush: Supported 00:17:39.175 Reservation: Supported 00:17:39.175 Namespace Sharing Capabilities: Multiple Controllers 00:17:39.175 Size (in LBAs): 131072 (0GiB) 00:17:39.175 Capacity (in LBAs): 131072 (0GiB) 00:17:39.175 Utilization (in LBAs): 131072 (0GiB) 00:17:39.175 NGUID: EAB5EFD41303404F8DA9D8BDF789F3B9 00:17:39.175 UUID: eab5efd4-1303-404f-8da9-d8bdf789f3b9 00:17:39.175 Thin Provisioning: Not Supported 00:17:39.175 Per-NS Atomic Units: Yes 00:17:39.175 Atomic Boundary Size (Normal): 0 00:17:39.175 Atomic Boundary Size (PFail): 0 00:17:39.175 Atomic Boundary Offset: 0 00:17:39.175 Maximum Single Source Range Length: 65535 00:17:39.175 Maximum Copy Length: 65535 00:17:39.175 Maximum Source Range Count: 1 00:17:39.175 NGUID/EUI64 Never Reused: No 00:17:39.175 Namespace Write Protected: No 00:17:39.175 Number of LBA Formats: 1 00:17:39.175 Current LBA Format: LBA Format #00 00:17:39.175 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:39.175 00:17:39.175 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:39.442 [2024-10-28 04:53:29.976047] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:44.704 Initializing NVMe Controllers 00:17:44.704 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:44.704 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:44.705 Initialization complete. Launching workers. 00:17:44.705 ======================================================== 00:17:44.705 Latency(us) 00:17:44.705 Device Information : IOPS MiB/s Average min max 00:17:44.705 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33196.00 129.67 3857.14 1200.75 8676.35 00:17:44.705 ======================================================== 00:17:44.705 Total : 33196.00 129.67 3857.14 1200.75 8676.35 00:17:44.705 00:17:44.705 [2024-10-28 04:53:35.072034] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:44.705 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:44.963 [2024-10-28 04:53:35.425326] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:50.321 Initializing NVMe Controllers 00:17:50.321 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:50.321 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:50.321 Initialization complete. Launching workers. 
00:17:50.321 ======================================================== 00:17:50.321 Latency(us) 00:17:50.321 Device Information : IOPS MiB/s Average min max 00:17:50.321 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30574.56 119.43 4185.73 1224.76 8283.70 00:17:50.321 ======================================================== 00:17:50.321 Total : 30574.56 119.43 4185.73 1224.76 8283.70 00:17:50.321 00:17:50.321 [2024-10-28 04:53:40.439168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:50.321 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:50.321 [2024-10-28 04:53:40.769597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:55.611 [2024-10-28 04:53:45.888782] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:55.611 Initializing NVMe Controllers 00:17:55.612 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:55.612 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:55.612 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:55.612 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:55.612 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:55.612 Initialization complete. Launching workers. 00:17:55.612 Starting thread on core 2 00:17:55.612 Starting thread on core 3 00:17:55.612 Starting thread on core 1 00:17:55.612 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:55.870 [2024-10-28 04:53:46.314039] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:59.153 [2024-10-28 04:53:49.376962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:59.153 Initializing NVMe Controllers 00:17:59.153 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:59.153 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:59.153 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:59.153 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:59.153 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:59.153 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:59.153 Initialization complete. Launching workers. 
00:17:59.153 Starting thread on core 1 with urgent priority queue 00:17:59.153 Starting thread on core 2 with urgent priority queue 00:17:59.153 Starting thread on core 3 with urgent priority queue 00:17:59.153 Starting thread on core 0 with urgent priority queue 00:17:59.153 SPDK bdev Controller (SPDK2 ) core 0: 4354.00 IO/s 22.97 secs/100000 ios 00:17:59.153 SPDK bdev Controller (SPDK2 ) core 1: 5683.67 IO/s 17.59 secs/100000 ios 00:17:59.153 SPDK bdev Controller (SPDK2 ) core 2: 5839.33 IO/s 17.13 secs/100000 ios 00:17:59.153 SPDK bdev Controller (SPDK2 ) core 3: 5934.00 IO/s 16.85 secs/100000 ios 00:17:59.153 ======================================================== 00:17:59.153 00:17:59.153 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:59.410 [2024-10-28 04:53:49.792039] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:59.410 Initializing NVMe Controllers 00:17:59.410 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:59.410 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:59.410 Namespace ID: 1 size: 0GB 00:17:59.410 Initialization complete. 00:17:59.410 INFO: using host memory buffer for IO 00:17:59.410 Hello world! 00:17:59.410 [2024-10-28 04:53:49.802153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:59.410 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:59.666 [2024-10-28 04:53:50.219850] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:01.038 Initializing NVMe Controllers 00:18:01.038 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:01.038 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:01.038 Initialization complete. Launching workers. 
00:18:01.038 submit (in ns) avg, min, max = 7685.4, 3481.7, 4025786.2 00:18:01.038 complete (in ns) avg, min, max = 27215.4, 2065.0, 4175817.3 00:18:01.038 00:18:01.038 Submit histogram 00:18:01.038 ================ 00:18:01.038 Range in us Cumulative Count 00:18:01.038 3.469 - 3.493: 0.0077% ( 1) 00:18:01.038 3.493 - 3.517: 0.0154% ( 1) 00:18:01.038 3.517 - 3.540: 0.3613% ( 45) 00:18:01.038 3.540 - 3.564: 1.7988% ( 187) 00:18:01.038 3.564 - 3.588: 4.7736% ( 387) 00:18:01.038 3.588 - 3.612: 9.9623% ( 675) 00:18:01.038 3.612 - 3.635: 19.9631% ( 1301) 00:18:01.038 3.635 - 3.659: 29.9869% ( 1304) 00:18:01.038 3.659 - 3.683: 38.5656% ( 1116) 00:18:01.038 3.683 - 3.707: 44.7767% ( 808) 00:18:01.038 3.707 - 3.730: 51.1953% ( 835) 00:18:01.038 3.730 - 3.754: 56.1227% ( 641) 00:18:01.038 3.754 - 3.778: 60.3352% ( 548) 00:18:01.038 3.778 - 3.802: 63.8404% ( 456) 00:18:01.038 3.802 - 3.826: 66.9306% ( 402) 00:18:01.038 3.826 - 3.849: 70.3513% ( 445) 00:18:01.038 3.849 - 3.873: 74.4715% ( 536) 00:18:01.038 3.873 - 3.897: 78.6456% ( 543) 00:18:01.038 3.897 - 3.921: 82.4583% ( 496) 00:18:01.038 3.921 - 3.944: 85.4178% ( 385) 00:18:01.038 3.944 - 3.968: 87.2934% ( 244) 00:18:01.038 3.968 - 3.992: 89.0614% ( 230) 00:18:01.038 3.992 - 4.016: 90.4989% ( 187) 00:18:01.038 4.016 - 4.039: 91.6827% ( 154) 00:18:01.038 4.039 - 4.063: 92.6128% ( 121) 00:18:01.038 4.063 - 4.087: 93.5045% ( 116) 00:18:01.038 4.087 - 4.111: 94.2578% ( 98) 00:18:01.038 4.111 - 4.134: 94.9958% ( 96) 00:18:01.038 4.134 - 4.158: 95.4570% ( 60) 00:18:01.038 4.158 - 4.182: 95.8029% ( 45) 00:18:01.038 4.182 - 4.206: 96.0720% ( 35) 00:18:01.038 4.206 - 4.229: 96.2411% ( 22) 00:18:01.038 4.229 - 4.253: 96.3871% ( 19) 00:18:01.038 4.253 - 4.277: 96.4947% ( 14) 00:18:01.038 4.277 - 4.301: 96.5947% ( 13) 00:18:01.038 4.301 - 4.324: 96.6715% ( 10) 00:18:01.038 4.324 - 4.348: 96.7638% ( 12) 00:18:01.038 4.348 - 4.372: 96.7868% ( 3) 00:18:01.038 4.372 - 4.396: 96.8406% ( 7) 00:18:01.038 4.396 - 4.420: 96.8714% ( 4) 00:18:01.038 4.420 - 4.443: 96.8868% ( 2) 00:18:01.038 4.443 - 4.467: 96.8945% ( 1) 00:18:01.038 4.467 - 4.491: 96.9021% ( 1) 00:18:01.038 4.633 - 4.657: 96.9098% ( 1) 00:18:01.038 4.657 - 4.681: 96.9406% ( 4) 00:18:01.038 4.728 - 4.752: 96.9483% ( 1) 00:18:01.038 4.776 - 4.800: 96.9790% ( 4) 00:18:01.038 4.800 - 4.823: 97.0098% ( 4) 00:18:01.038 4.823 - 4.847: 97.0405% ( 4) 00:18:01.038 4.847 - 4.871: 97.1481% ( 14) 00:18:01.038 4.871 - 4.895: 97.2327% ( 11) 00:18:01.038 4.895 - 4.919: 97.3096% ( 10) 00:18:01.038 4.919 - 4.942: 97.3787% ( 9) 00:18:01.038 4.942 - 4.966: 97.4249% ( 6) 00:18:01.038 4.966 - 4.990: 97.4556% ( 4) 00:18:01.038 4.990 - 5.014: 97.4940% ( 5) 00:18:01.038 5.014 - 5.037: 97.5171% ( 3) 00:18:01.038 5.037 - 5.061: 97.5479% ( 4) 00:18:01.038 5.061 - 5.085: 97.5786% ( 4) 00:18:01.038 5.085 - 5.109: 97.5940% ( 2) 00:18:01.038 5.109 - 5.132: 97.6247% ( 4) 00:18:01.038 5.132 - 5.156: 97.6862% ( 8) 00:18:01.038 5.156 - 5.180: 97.6939% ( 1) 00:18:01.038 5.180 - 5.204: 97.7093% ( 2) 00:18:01.038 5.204 - 5.227: 97.7247% ( 2) 00:18:01.038 5.227 - 5.251: 97.7477% ( 3) 00:18:01.038 5.251 - 5.275: 97.7708% ( 3) 00:18:01.038 5.275 - 5.299: 97.7861% ( 2) 00:18:01.038 5.322 - 5.346: 97.7938% ( 1) 00:18:01.038 5.346 - 5.370: 97.8092% ( 2) 00:18:01.038 5.370 - 5.394: 97.8169% ( 1) 00:18:01.038 5.394 - 5.417: 97.8246% ( 1) 00:18:01.038 5.465 - 5.489: 97.8323% ( 1) 00:18:01.038 5.513 - 5.536: 97.8400% ( 1) 00:18:01.038 5.703 - 5.726: 97.8476% ( 1) 00:18:01.038 5.726 - 5.750: 97.8630% ( 2) 00:18:01.038 5.774 - 5.798: 97.8707% ( 1) 
00:18:01.038 6.130 - 6.178: 97.8784% ( 1) 00:18:01.038 6.178 - 6.225: 97.8938% ( 2) 00:18:01.038 6.225 - 6.273: 97.9015% ( 1) 00:18:01.038 6.273 - 6.320: 97.9091% ( 1) 00:18:01.038 6.320 - 6.368: 97.9322% ( 3) 00:18:01.038 6.415 - 6.463: 97.9476% ( 2) 00:18:01.038 6.558 - 6.606: 97.9629% ( 2) 00:18:01.038 6.606 - 6.653: 97.9706% ( 1) 00:18:01.038 6.701 - 6.748: 97.9783% ( 1) 00:18:01.038 6.843 - 6.891: 97.9860% ( 1) 00:18:01.038 6.891 - 6.938: 98.0014% ( 2) 00:18:01.038 6.938 - 6.986: 98.0091% ( 1) 00:18:01.038 7.128 - 7.176: 98.0168% ( 1) 00:18:01.038 7.223 - 7.271: 98.0244% ( 1) 00:18:01.038 7.366 - 7.413: 98.0321% ( 1) 00:18:01.038 7.413 - 7.461: 98.0398% ( 1) 00:18:01.038 7.508 - 7.556: 98.0552% ( 2) 00:18:01.038 7.556 - 7.603: 98.0859% ( 4) 00:18:01.038 7.603 - 7.651: 98.0936% ( 1) 00:18:01.038 7.699 - 7.746: 98.1013% ( 1) 00:18:01.038 7.841 - 7.889: 98.1090% ( 1) 00:18:01.038 7.889 - 7.936: 98.1244% ( 2) 00:18:01.038 7.936 - 7.984: 98.1397% ( 2) 00:18:01.038 7.984 - 8.031: 98.1474% ( 1) 00:18:01.038 8.079 - 8.126: 98.1551% ( 1) 00:18:01.038 8.126 - 8.174: 98.1628% ( 1) 00:18:01.038 8.174 - 8.221: 98.1705% ( 1) 00:18:01.038 8.221 - 8.269: 98.1859% ( 2) 00:18:01.038 8.316 - 8.364: 98.1936% ( 1) 00:18:01.038 8.364 - 8.411: 98.2089% ( 2) 00:18:01.038 8.411 - 8.459: 98.2320% ( 3) 00:18:01.038 8.554 - 8.601: 98.2551% ( 3) 00:18:01.038 8.649 - 8.696: 98.2627% ( 1) 00:18:01.038 8.696 - 8.744: 98.2781% ( 2) 00:18:01.038 8.744 - 8.792: 98.2858% ( 1) 00:18:01.038 8.792 - 8.839: 98.3012% ( 2) 00:18:01.038 8.839 - 8.887: 98.3166% ( 2) 00:18:01.038 8.887 - 8.934: 98.3242% ( 1) 00:18:01.038 8.934 - 8.982: 98.3319% ( 1) 00:18:01.038 8.982 - 9.029: 98.3396% ( 1) 00:18:01.038 9.029 - 9.077: 98.3473% ( 1) 00:18:01.038 9.077 - 9.124: 98.3550% ( 1) 00:18:01.038 9.219 - 9.267: 98.3780% ( 3) 00:18:01.038 9.267 - 9.314: 98.3934% ( 2) 00:18:01.038 9.362 - 9.409: 98.4165% ( 3) 00:18:01.038 9.457 - 9.504: 98.4395% ( 3) 00:18:01.038 9.504 - 9.552: 98.4549% ( 2) 00:18:01.038 9.647 - 9.694: 98.4626% ( 1) 00:18:01.038 9.694 - 9.742: 98.4703% ( 1) 00:18:01.038 10.217 - 10.265: 98.4857% ( 2) 00:18:01.038 10.265 - 10.312: 98.5010% ( 2) 00:18:01.038 10.312 - 10.360: 98.5087% ( 1) 00:18:01.038 10.407 - 10.455: 98.5164% ( 1) 00:18:01.038 10.455 - 10.502: 98.5241% ( 1) 00:18:01.038 10.502 - 10.550: 98.5395% ( 2) 00:18:01.038 10.597 - 10.645: 98.5472% ( 1) 00:18:01.038 10.787 - 10.835: 98.5548% ( 1) 00:18:01.038 10.882 - 10.930: 98.5625% ( 1) 00:18:01.038 11.025 - 11.073: 98.5702% ( 1) 00:18:01.038 11.120 - 11.168: 98.5779% ( 1) 00:18:01.038 11.168 - 11.215: 98.5856% ( 1) 00:18:01.038 11.263 - 11.310: 98.6010% ( 2) 00:18:01.039 11.453 - 11.500: 98.6087% ( 1) 00:18:01.039 11.500 - 11.548: 98.6163% ( 1) 00:18:01.039 11.548 - 11.595: 98.6240% ( 1) 00:18:01.039 11.595 - 11.643: 98.6394% ( 2) 00:18:01.039 11.738 - 11.785: 98.6548% ( 2) 00:18:01.039 11.880 - 11.928: 98.6625% ( 1) 00:18:01.039 12.023 - 12.071: 98.6855% ( 3) 00:18:01.039 12.166 - 12.261: 98.7009% ( 2) 00:18:01.039 12.261 - 12.356: 98.7086% ( 1) 00:18:01.039 12.356 - 12.451: 98.7240% ( 2) 00:18:01.039 12.451 - 12.546: 98.7316% ( 1) 00:18:01.039 12.546 - 12.641: 98.7470% ( 2) 00:18:01.039 12.736 - 12.831: 98.7547% ( 1) 00:18:01.039 12.926 - 13.021: 98.7701% ( 2) 00:18:01.039 13.401 - 13.496: 98.7855% ( 2) 00:18:01.039 13.496 - 13.591: 98.7931% ( 1) 00:18:01.039 13.591 - 13.686: 98.8008% ( 1) 00:18:01.039 13.781 - 13.876: 98.8239% ( 3) 00:18:01.039 13.876 - 13.971: 98.8316% ( 1) 00:18:01.039 14.066 - 14.161: 98.8470% ( 2) 00:18:01.039 14.161 - 14.257: 98.8546% ( 1) 
00:18:01.039 14.257 - 14.352: 98.8623% ( 1) 00:18:01.039 14.352 - 14.447: 98.8700% ( 1) 00:18:01.039 14.447 - 14.542: 98.8777% ( 1) 00:18:01.039 14.542 - 14.637: 98.8931% ( 2) 00:18:01.039 14.637 - 14.732: 98.9008% ( 1) 00:18:01.039 14.732 - 14.827: 98.9084% ( 1) 00:18:01.039 14.827 - 14.922: 98.9315% ( 3) 00:18:01.039 14.922 - 15.017: 98.9392% ( 1) 00:18:01.039 15.017 - 15.112: 98.9469% ( 1) 00:18:01.039 15.207 - 15.302: 98.9546% ( 1) 00:18:01.039 16.157 - 16.252: 98.9623% ( 1) 00:18:01.039 17.298 - 17.393: 98.9699% ( 1) 00:18:01.039 17.393 - 17.488: 99.0007% ( 4) 00:18:01.039 17.488 - 17.583: 99.0391% ( 5) 00:18:01.039 17.583 - 17.678: 99.0852% ( 6) 00:18:01.039 17.678 - 17.773: 99.1467% ( 8) 00:18:01.039 17.773 - 17.868: 99.2006% ( 7) 00:18:01.039 17.868 - 17.963: 99.2544% ( 7) 00:18:01.039 17.963 - 18.058: 99.3005% ( 6) 00:18:01.039 18.058 - 18.153: 99.3697% ( 9) 00:18:01.039 18.153 - 18.248: 99.4465% ( 10) 00:18:01.039 18.248 - 18.343: 99.5003% ( 7) 00:18:01.039 18.343 - 18.438: 99.5388% ( 5) 00:18:01.039 18.438 - 18.534: 99.5772% ( 5) 00:18:01.039 18.534 - 18.629: 99.6080% ( 4) 00:18:01.039 18.629 - 18.724: 99.6310% ( 3) 00:18:01.039 18.724 - 18.819: 99.6771% ( 6) 00:18:01.039 18.819 - 18.914: 99.7540% ( 10) 00:18:01.039 18.914 - 19.009: 99.7771% ( 3) 00:18:01.039 19.009 - 19.104: 99.8001% ( 3) 00:18:01.039 19.104 - 19.199: 99.8155% ( 2) 00:18:01.039 19.199 - 19.294: 99.8232% ( 1) 00:18:01.039 19.294 - 19.389: 99.8309% ( 1) 00:18:01.039 19.674 - 19.769: 99.8386% ( 1) 00:18:01.039 20.720 - 20.815: 99.8463% ( 1) 00:18:01.039 22.145 - 22.240: 99.8539% ( 1) 00:18:01.039 22.430 - 22.525: 99.8616% ( 1) 00:18:01.039 22.620 - 22.715: 99.8693% ( 1) 00:18:01.039 27.182 - 27.373: 99.8770% ( 1) 00:18:01.039 28.323 - 28.513: 99.8847% ( 1) 00:18:01.039 28.703 - 28.893: 99.8924% ( 1) 00:18:01.039 31.364 - 31.554: 99.9001% ( 1) 00:18:01.039 32.505 - 32.695: 99.9078% ( 1) 00:18:01.039 3990.311 - 4014.643: 99.9769% ( 9) 00:18:01.039 4014.643 - 4038.974: 100.0000% ( 3) 00:18:01.039 00:18:01.039 Complete histogram 00:18:01.039 ================== 00:18:01.039 Range in us Cumulative Count 00:18:01.039 2.055 - 2.067: 0.0461% ( 6) 00:18:01.039 2.067 - 2.079: 12.3146% ( 1596) 00:18:01.039 2.079 - 2.091: 32.0701% ( 2570) 00:18:01.039 2.091 - 2.103: 33.9380% ( 243) 00:18:01.039 2.103 - 2.115: 50.2883% ( 2127) 00:18:01.039 2.115 - 2.127: 58.7978% ( 1107) 00:18:01.039 2.127 - 2.138: 60.6964% ( 247) 00:18:01.039 2.138 - 2.150: 68.0221% ( 953) 00:18:01.039 2.150 - 2.162: 72.0040% ( 518) 00:18:01.039 2.162 - 2.174: 73.4568% ( 189) 00:18:01.039 2.174 - 2.186: 78.9146% ( 710) 00:18:01.039 2.186 - 2.198: 80.8902% ( 257) 00:18:01.039 2.198 - 2.210: 81.7280% ( 109) 00:18:01.039 2.210 - 2.222: 84.9719% ( 422) 00:18:01.039 2.222 - 2.234: 87.7085% ( 356) 00:18:01.039 2.234 - 2.245: 89.7148% ( 261) 00:18:01.039 2.245 - 2.257: 92.0363% ( 302) 00:18:01.039 2.257 - 2.269: 93.3123% ( 166) 00:18:01.039 2.269 - 2.281: 93.6967% ( 50) 00:18:01.039 2.281 - 2.293: 94.2117% ( 67) 00:18:01.039 2.293 - 2.305: 94.7037% ( 64) 00:18:01.039 2.305 - 2.317: 95.3571% ( 85) 00:18:01.039 2.317 - 2.329: 95.5492% ( 25) 00:18:01.039 2.329 - 2.340: 95.6184% ( 9) 00:18:01.039 2.340 - 2.352: 95.6799% ( 8) 00:18:01.039 2.352 - 2.364: 95.8490% ( 22) 00:18:01.039 2.364 - 2.376: 96.1104% ( 34) 00:18:01.039 2.376 - 2.388: 96.3717% ( 34) 00:18:01.039 2.388 - 2.400: 96.6869% ( 41) 00:18:01.039 2.400 - 2.412: 96.9560% ( 35) 00:18:01.039 2.412 - 2.424: 97.1097% ( 20) 00:18:01.039 2.424 - 2.435: 97.3326% ( 29) 00:18:01.039 2.435 - 2.447: 97.4864% ( 20) 
00:18:01.039 2.447 - 2.459: 97.6555% ( 22) 00:18:01.039 2.459 - 2.471: 97.8784% ( 29) 00:18:01.039 2.471 - 2.483: 98.0014% ( 16) 00:18:01.039 2.483 - 2.495: 98.0936% ( 12) 00:18:01.039 2.495 - 2.507: 98.1321% ( 5) 00:18:01.039 2.507 - 2.519: 98.1705% ( 5) 00:18:01.039 2.519 - 2.531: 98.2243% ( 7) 00:18:01.039 2.531 - 2.542: 98.2627% ( 5) 00:18:01.039 2.542 - 2.554: 98.2858% ( 3) 00:18:01.039 2.554 - 2.566: 98.3242% ( 5) 00:18:01.039 2.566 - 2.578: 98.3396% ( 2) 00:18:01.039 2.578 - 2.590: 98.3473% ( 1) 00:18:01.039 2.590 - 2.602: 98.3550% ( 1) 00:18:01.039 2.602 - 2.614: 9[2024-10-28 04:53:51.314442] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:01.039 8.3627% ( 1) 00:18:01.039 2.614 - 2.626: 98.3704% ( 1) 00:18:01.039 2.649 - 2.661: 98.4011% ( 4) 00:18:01.039 2.756 - 2.768: 98.4165% ( 2) 00:18:01.039 2.804 - 2.816: 98.4242% ( 1) 00:18:01.039 2.828 - 2.839: 98.4319% ( 1) 00:18:01.039 3.184 - 3.208: 98.4395% ( 1) 00:18:01.039 3.327 - 3.350: 98.4549% ( 2) 00:18:01.039 3.445 - 3.469: 98.4703% ( 2) 00:18:01.039 3.469 - 3.493: 98.4934% ( 3) 00:18:01.039 3.493 - 3.517: 98.5087% ( 2) 00:18:01.039 3.517 - 3.540: 98.5164% ( 1) 00:18:01.039 3.588 - 3.612: 98.5241% ( 1) 00:18:01.039 3.612 - 3.635: 98.5395% ( 2) 00:18:01.039 3.659 - 3.683: 98.5548% ( 2) 00:18:01.039 3.683 - 3.707: 98.5625% ( 1) 00:18:01.039 3.707 - 3.730: 98.5779% ( 2) 00:18:01.039 3.730 - 3.754: 98.6010% ( 3) 00:18:01.039 3.754 - 3.778: 98.6087% ( 1) 00:18:01.039 3.802 - 3.826: 98.6317% ( 3) 00:18:01.039 3.944 - 3.968: 98.6394% ( 1) 00:18:01.039 3.968 - 3.992: 98.6471% ( 1) 00:18:01.039 3.992 - 4.016: 98.6548% ( 1) 00:18:01.039 4.087 - 4.111: 98.6625% ( 1) 00:18:01.039 4.301 - 4.324: 98.6702% ( 1) 00:18:01.039 5.441 - 5.465: 98.6778% ( 1) 00:18:01.039 5.750 - 5.774: 98.6855% ( 1) 00:18:01.039 5.774 - 5.798: 98.6932% ( 1) 00:18:01.039 5.798 - 5.821: 98.7009% ( 1) 00:18:01.039 5.845 - 5.869: 98.7086% ( 1) 00:18:01.039 5.916 - 5.940: 98.7163% ( 1) 00:18:01.039 6.178 - 6.225: 98.7240% ( 1) 00:18:01.039 6.320 - 6.368: 98.7316% ( 1) 00:18:01.039 6.368 - 6.415: 98.7393% ( 1) 00:18:01.039 6.653 - 6.701: 98.7547% ( 2) 00:18:01.039 6.701 - 6.748: 98.7701% ( 2) 00:18:01.039 6.748 - 6.796: 98.7778% ( 1) 00:18:01.039 7.033 - 7.081: 98.7855% ( 1) 00:18:01.039 7.556 - 7.603: 98.7931% ( 1) 00:18:01.039 7.889 - 7.936: 98.8008% ( 1) 00:18:01.039 8.601 - 8.649: 98.8085% ( 1) 00:18:01.039 8.696 - 8.744: 98.8162% ( 1) 00:18:01.039 12.546 - 12.641: 98.8239% ( 1) 00:18:01.039 15.587 - 15.682: 98.8470% ( 3) 00:18:01.039 15.682 - 15.777: 98.8546% ( 1) 00:18:01.039 15.777 - 15.872: 98.8700% ( 2) 00:18:01.039 15.872 - 15.967: 98.9008% ( 4) 00:18:01.039 15.967 - 16.062: 98.9469% ( 6) 00:18:01.039 16.062 - 16.157: 98.9776% ( 4) 00:18:01.039 16.157 - 16.252: 99.0391% ( 8) 00:18:01.039 16.252 - 16.348: 99.0622% ( 3) 00:18:01.039 16.348 - 16.443: 99.0929% ( 4) 00:18:01.039 16.443 - 16.538: 99.1314% ( 5) 00:18:01.039 16.538 - 16.633: 99.1544% ( 3) 00:18:01.039 16.633 - 16.728: 99.1852% ( 4) 00:18:01.039 16.728 - 16.823: 99.2159% ( 4) 00:18:01.039 16.823 - 16.918: 99.2774% ( 8) 00:18:01.039 16.918 - 17.013: 99.2851% ( 1) 00:18:01.039 17.013 - 17.108: 99.2928% ( 1) 00:18:01.039 17.108 - 17.203: 99.3005% ( 1) 00:18:01.039 17.203 - 17.298: 99.3082% ( 1) 00:18:01.039 17.298 - 17.393: 99.3159% ( 1) 00:18:01.039 17.393 - 17.488: 99.3235% ( 1) 00:18:01.039 17.488 - 17.583: 99.3312% ( 1) 00:18:01.039 17.583 - 17.678: 99.3466% ( 2) 00:18:01.039 17.773 - 17.868: 99.3543% ( 1) 00:18:01.039 18.248 - 18.343: 99.3620% 
( 1) 00:18:01.039 19.959 - 20.054: 99.3697% ( 1) 00:18:01.039 20.434 - 20.529: 99.3774% ( 1) 00:18:01.039 3990.311 - 4014.643: 99.8693% ( 64) 00:18:01.039 4014.643 - 4038.974: 99.9923% ( 16) 00:18:01.039 4160.630 - 4184.961: 100.0000% ( 1) 00:18:01.039 00:18:01.039 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:01.040 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:01.040 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:01.040 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:01.040 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:01.297 [ 00:18:01.297 { 00:18:01.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:01.297 "subtype": "Discovery", 00:18:01.297 "listen_addresses": [], 00:18:01.297 "allow_any_host": true, 00:18:01.297 "hosts": [] 00:18:01.297 }, 00:18:01.297 { 00:18:01.297 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:01.297 "subtype": "NVMe", 00:18:01.297 "listen_addresses": [ 00:18:01.297 { 00:18:01.297 "trtype": "VFIOUSER", 00:18:01.297 "adrfam": "IPv4", 00:18:01.297 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:01.297 "trsvcid": "0" 00:18:01.297 } 00:18:01.297 ], 00:18:01.297 "allow_any_host": true, 00:18:01.297 "hosts": [], 00:18:01.297 "serial_number": "SPDK1", 00:18:01.297 "model_number": "SPDK bdev Controller", 00:18:01.297 "max_namespaces": 32, 00:18:01.297 "min_cntlid": 1, 00:18:01.297 "max_cntlid": 65519, 00:18:01.297 "namespaces": [ 00:18:01.297 { 00:18:01.297 "nsid": 1, 00:18:01.297 "bdev_name": "Malloc1", 00:18:01.297 "name": "Malloc1", 00:18:01.297 "nguid": "62D8BED6AA63422F987AC4FF9A441846", 00:18:01.297 "uuid": "62d8bed6-aa63-422f-987a-c4ff9a441846" 00:18:01.297 }, 00:18:01.297 { 00:18:01.297 "nsid": 2, 00:18:01.297 "bdev_name": "Malloc3", 00:18:01.297 "name": "Malloc3", 00:18:01.297 "nguid": "D93CE9E648784D07AA08C72B6EC0DEB4", 00:18:01.297 "uuid": "d93ce9e6-4878-4d07-aa08-c72b6ec0deb4" 00:18:01.297 } 00:18:01.297 ] 00:18:01.297 }, 00:18:01.297 { 00:18:01.297 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:01.297 "subtype": "NVMe", 00:18:01.297 "listen_addresses": [ 00:18:01.297 { 00:18:01.297 "trtype": "VFIOUSER", 00:18:01.297 "adrfam": "IPv4", 00:18:01.297 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:01.297 "trsvcid": "0" 00:18:01.297 } 00:18:01.297 ], 00:18:01.297 "allow_any_host": true, 00:18:01.297 "hosts": [], 00:18:01.297 "serial_number": "SPDK2", 00:18:01.297 "model_number": "SPDK bdev Controller", 00:18:01.297 "max_namespaces": 32, 00:18:01.297 "min_cntlid": 1, 00:18:01.297 "max_cntlid": 65519, 00:18:01.297 "namespaces": [ 00:18:01.297 { 00:18:01.297 "nsid": 1, 00:18:01.297 "bdev_name": "Malloc2", 00:18:01.297 "name": "Malloc2", 00:18:01.297 "nguid": "EAB5EFD41303404F8DA9D8BDF789F3B9", 00:18:01.297 "uuid": "eab5efd4-1303-404f-8da9-d8bdf789f3b9" 00:18:01.297 } 00:18:01.297 ] 00:18:01.297 } 00:18:01.297 ] 00:18:01.297 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:01.297 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # 
aerpid=2311245 00:18:01.297 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:01.297 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:01.297 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:01.297 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:01.297 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:01.297 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:01.297 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:01.297 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:01.555 [2024-10-28 04:53:51.969991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:01.555 Malloc4 00:18:01.555 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:01.813 [2024-10-28 04:53:52.261375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:01.813 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:01.813 Asynchronous Event Request test 00:18:01.813 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:01.813 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:01.813 Registering asynchronous event callbacks... 00:18:01.813 Starting namespace attribute notice tests for all controllers... 00:18:01.813 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:01.813 aer_cb - Changed Namespace 00:18:01.813 Cleaning up... 
00:18:02.072 [ 00:18:02.072 { 00:18:02.072 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:02.072 "subtype": "Discovery", 00:18:02.072 "listen_addresses": [], 00:18:02.072 "allow_any_host": true, 00:18:02.072 "hosts": [] 00:18:02.072 }, 00:18:02.072 { 00:18:02.072 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:02.072 "subtype": "NVMe", 00:18:02.072 "listen_addresses": [ 00:18:02.072 { 00:18:02.072 "trtype": "VFIOUSER", 00:18:02.072 "adrfam": "IPv4", 00:18:02.072 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:02.072 "trsvcid": "0" 00:18:02.072 } 00:18:02.072 ], 00:18:02.072 "allow_any_host": true, 00:18:02.072 "hosts": [], 00:18:02.072 "serial_number": "SPDK1", 00:18:02.072 "model_number": "SPDK bdev Controller", 00:18:02.072 "max_namespaces": 32, 00:18:02.072 "min_cntlid": 1, 00:18:02.072 "max_cntlid": 65519, 00:18:02.072 "namespaces": [ 00:18:02.072 { 00:18:02.072 "nsid": 1, 00:18:02.072 "bdev_name": "Malloc1", 00:18:02.072 "name": "Malloc1", 00:18:02.072 "nguid": "62D8BED6AA63422F987AC4FF9A441846", 00:18:02.072 "uuid": "62d8bed6-aa63-422f-987a-c4ff9a441846" 00:18:02.072 }, 00:18:02.072 { 00:18:02.072 "nsid": 2, 00:18:02.072 "bdev_name": "Malloc3", 00:18:02.072 "name": "Malloc3", 00:18:02.072 "nguid": "D93CE9E648784D07AA08C72B6EC0DEB4", 00:18:02.072 "uuid": "d93ce9e6-4878-4d07-aa08-c72b6ec0deb4" 00:18:02.072 } 00:18:02.072 ] 00:18:02.072 }, 00:18:02.072 { 00:18:02.072 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:02.072 "subtype": "NVMe", 00:18:02.072 "listen_addresses": [ 00:18:02.072 { 00:18:02.072 "trtype": "VFIOUSER", 00:18:02.072 "adrfam": "IPv4", 00:18:02.072 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:02.072 "trsvcid": "0" 00:18:02.072 } 00:18:02.072 ], 00:18:02.072 "allow_any_host": true, 00:18:02.072 "hosts": [], 00:18:02.072 "serial_number": "SPDK2", 00:18:02.072 "model_number": "SPDK bdev Controller", 00:18:02.072 "max_namespaces": 32, 00:18:02.072 "min_cntlid": 1, 00:18:02.072 "max_cntlid": 65519, 00:18:02.072 "namespaces": [ 00:18:02.072 { 00:18:02.072 "nsid": 1, 00:18:02.072 "bdev_name": "Malloc2", 00:18:02.072 "name": "Malloc2", 00:18:02.072 "nguid": "EAB5EFD41303404F8DA9D8BDF789F3B9", 00:18:02.072 "uuid": "eab5efd4-1303-404f-8da9-d8bdf789f3b9" 00:18:02.072 }, 00:18:02.072 { 00:18:02.072 "nsid": 2, 00:18:02.072 "bdev_name": "Malloc4", 00:18:02.072 "name": "Malloc4", 00:18:02.072 "nguid": "79449EFC1A3B4DEA8E2E63C2F30431B0", 00:18:02.072 "uuid": "79449efc-1a3b-4dea-8e2e-63c2f30431b0" 00:18:02.072 } 00:18:02.072 ] 00:18:02.072 } 00:18:02.072 ] 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2311245 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2305458 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2305458 ']' 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2305458 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2305458 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2305458' 00:18:02.072 killing process with pid 2305458 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2305458 00:18:02.072 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2305458 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2311421 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2311421' 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:02.331 Process pid: 2311421 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2311421 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2311421 ']' 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.331 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:02.590 [2024-10-28 04:53:52.938644] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:02.590 [2024-10-28 04:53:52.939711] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:18:02.590 [2024-10-28 04:53:52.939775] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.590 [2024-10-28 04:53:53.072585] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:02.590 [2024-10-28 04:53:53.114514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:02.590 [2024-10-28 04:53:53.162791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.590 [2024-10-28 04:53:53.162857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.590 [2024-10-28 04:53:53.162884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.590 [2024-10-28 04:53:53.162905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.590 [2024-10-28 04:53:53.162924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.590 [2024-10-28 04:53:53.164771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.590 [2024-10-28 04:53:53.164828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.590 [2024-10-28 04:53:53.164955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:02.590 [2024-10-28 04:53:53.164961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.849 [2024-10-28 04:53:53.262948] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:02.849 [2024-10-28 04:53:53.263153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:02.849 [2024-10-28 04:53:53.263515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:02.849 [2024-10-28 04:53:53.264239] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:02.849 [2024-10-28 04:53:53.264537] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
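The interrupt-mode pass that starts here repeats the earlier vfio-user bring-up. A sketch of the sequence, with flags copied from the trace that follows, long workspace paths abbreviated, and only the cnode1 device shown (the same steps are repeated for vfio-user2/2 and cnode2):
  # nvmf target on cores 0-3, running in interrupt mode
  nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  sleep 1
  # VFIOUSER transport, created with the extra '-M -I' arguments this test passes
  rpc.py nvmf_create_transport -t VFIOUSER -M -I
  # per-device socket directory, backing bdev, subsystem, namespace, listener
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0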
00:18:03.415 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.415 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:03.416 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:04.351 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:04.919 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:04.919 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:04.919 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:04.919 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:04.919 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:05.178 Malloc1 00:18:05.178 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:05.437 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:05.695 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:05.953 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:05.953 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:05.953 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:06.211 Malloc2 00:18:06.211 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:06.469 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:06.727 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:06.985 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:06.985 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2311421 00:18:06.985 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 2311421 ']' 00:18:06.985 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2311421 00:18:06.985 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:06.985 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:06.985 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2311421 00:18:07.243 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:07.243 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:07.243 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2311421' 00:18:07.243 killing process with pid 2311421 00:18:07.243 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2311421 00:18:07.243 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2311421 00:18:07.502 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:07.502 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:07.502 00:18:07.502 real 0m56.612s 00:18:07.502 user 3m36.811s 00:18:07.502 sys 0m3.928s 00:18:07.502 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:07.502 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:07.502 ************************************ 00:18:07.502 END TEST nvmf_vfio_user 00:18:07.502 ************************************ 00:18:07.502 04:53:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:07.502 04:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:07.502 04:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:07.502 04:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:07.502 ************************************ 00:18:07.502 START TEST nvmf_vfio_user_nvme_compliance 00:18:07.502 ************************************ 00:18:07.502 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:07.502 * Looking for test storage... 
00:18:07.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:07.502 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:18:07.503 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1689 -- # lcov --version 00:18:07.503 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:18:07.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.503 --rc genhtml_branch_coverage=1 00:18:07.503 --rc genhtml_function_coverage=1 00:18:07.503 --rc genhtml_legend=1 00:18:07.503 --rc geninfo_all_blocks=1 00:18:07.503 --rc geninfo_unexecuted_blocks=1 00:18:07.503 00:18:07.503 ' 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:18:07.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.503 --rc genhtml_branch_coverage=1 00:18:07.503 --rc genhtml_function_coverage=1 00:18:07.503 --rc genhtml_legend=1 00:18:07.503 --rc geninfo_all_blocks=1 00:18:07.503 --rc geninfo_unexecuted_blocks=1 00:18:07.503 00:18:07.503 ' 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:18:07.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.503 --rc genhtml_branch_coverage=1 00:18:07.503 --rc genhtml_function_coverage=1 00:18:07.503 --rc genhtml_legend=1 00:18:07.503 --rc geninfo_all_blocks=1 00:18:07.503 --rc geninfo_unexecuted_blocks=1 00:18:07.503 00:18:07.503 ' 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:18:07.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.503 --rc genhtml_branch_coverage=1 00:18:07.503 --rc genhtml_function_coverage=1 00:18:07.503 --rc genhtml_legend=1 00:18:07.503 --rc geninfo_all_blocks=1 00:18:07.503 --rc 
geninfo_unexecuted_blocks=1 00:18:07.503 00:18:07.503 ' 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:07.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:07.503 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2312025 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2312025' 00:18:07.504 Process pid: 2312025 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2312025 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2312025 ']' 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:07.504 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:07.762 [2024-10-28 04:53:58.140244] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:18:07.762 [2024-10-28 04:53:58.140332] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.762 [2024-10-28 04:53:58.274560] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:07.762 [2024-10-28 04:53:58.316652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:08.021 [2024-10-28 04:53:58.367722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.021 [2024-10-28 04:53:58.367783] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.021 [2024-10-28 04:53:58.367807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.021 [2024-10-28 04:53:58.367829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.021 [2024-10-28 04:53:58.367847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.021 [2024-10-28 04:53:58.369522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.021 [2024-10-28 04:53:58.369578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.021 [2024-10-28 04:53:58.369581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.588 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:08.588 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:18:08.588 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:09.967 malloc0 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.967 04:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.967 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:09.967 00:18:09.967 00:18:09.967 CUnit - A unit testing framework for C - Version 2.1-3 00:18:09.967 http://cunit.sourceforge.net/ 00:18:09.967 00:18:09.967 00:18:09.967 Suite: nvme_compliance 00:18:09.967 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-28 04:54:00.483111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:09.967 [2024-10-28 04:54:00.484527] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:09.967 [2024-10-28 04:54:00.484551] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:09.967 [2024-10-28 04:54:00.484563] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:09.967 [2024-10-28 04:54:00.486119] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:09.967 passed 00:18:10.225 Test: admin_identify_ctrlr_verify_fused ...[2024-10-28 04:54:00.572521] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:10.226 [2024-10-28 04:54:00.575537] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:10.226 passed 00:18:10.226 Test: admin_identify_ns ...[2024-10-28 04:54:00.662207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:10.226 [2024-10-28 04:54:00.721650] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:10.226 [2024-10-28 04:54:00.729653] 
ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:10.226 [2024-10-28 04:54:00.750773] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:10.226 passed 00:18:10.483 Test: admin_get_features_mandatory_features ...[2024-10-28 04:54:00.834224] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:10.483 [2024-10-28 04:54:00.837240] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:10.483 passed 00:18:10.483 Test: admin_get_features_optional_features ...[2024-10-28 04:54:00.924595] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:10.483 [2024-10-28 04:54:00.927611] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:10.483 passed 00:18:10.483 Test: admin_set_features_number_of_queues ...[2024-10-28 04:54:01.010730] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:10.742 [2024-10-28 04:54:01.114730] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:10.742 passed 00:18:10.742 Test: admin_get_log_page_mandatory_logs ...[2024-10-28 04:54:01.198717] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:10.742 [2024-10-28 04:54:01.201738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:10.742 passed 00:18:10.742 Test: admin_get_log_page_with_lpo ...[2024-10-28 04:54:01.284764] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:11.000 [2024-10-28 04:54:01.357654] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:11.000 [2024-10-28 04:54:01.370722] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:11.000 passed 00:18:11.000 Test: fabric_property_get ...[2024-10-28 04:54:01.450246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:11.000 [2024-10-28 04:54:01.451515] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:11.000 [2024-10-28 04:54:01.453261] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:11.000 passed 00:18:11.000 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-28 04:54:01.539639] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:11.000 [2024-10-28 04:54:01.540951] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:11.000 [2024-10-28 04:54:01.542654] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:11.000 passed 00:18:11.257 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-28 04:54:01.624722] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:11.257 [2024-10-28 04:54:01.709642] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:11.257 [2024-10-28 04:54:01.725659] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:11.257 [2024-10-28 04:54:01.731746] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:11.257 passed 00:18:11.257 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-28 04:54:01.818134] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 
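(For reference, the vfio-user endpoint these compliance cases exercise was assembled by the rpc_cmd calls traced earlier in this section. Collected in one place, the sequence is roughly the following sketch; rpc_cmd is the autotest helper that issues each RPC to the target on /var/tmp/spdk.sock:)

    nqn=nqn.2021-09.io.spdk:cnode0
    traddr=/var/run/vfio-user
    # Register the VFIOUSER transport with the running nvmf_tgt
    rpc_cmd nvmf_create_transport -t VFIOUSER
    # Directory that will hold the vfio-user socket/endpoint files
    mkdir -p "$traddr"
    # 64 MB malloc bdev with 512-byte blocks to back the namespace
    rpc_cmd bdev_malloc_create 64 512 -b malloc0
    # Subsystem: allow any host (-a), serial 'spdk', up to 32 namespaces (-m 32)
    rpc_cmd nvmf_create_subsystem "$nqn" -a -s spdk -m 32
    rpc_cmd nvmf_subsystem_add_ns "$nqn" malloc0
    rpc_cmd nvmf_subsystem_add_listener "$nqn" -t VFIOUSER -a "$traddr" -s 0
    # Run the CUnit compliance suite against that endpoint
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g \
        -r "trtype:VFIOUSER traddr:$traddr subnqn:$nqn"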
00:18:11.257 [2024-10-28 04:54:01.819433] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:11.257 [2024-10-28 04:54:01.821153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:11.515 passed 00:18:11.515 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-28 04:54:01.907506] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:11.515 [2024-10-28 04:54:01.983662] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:11.515 [2024-10-28 04:54:02.007660] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:11.515 [2024-10-28 04:54:02.013761] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:11.515 passed 00:18:11.515 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-28 04:54:02.098955] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:11.515 [2024-10-28 04:54:02.100238] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:11.515 [2024-10-28 04:54:02.100276] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:11.515 [2024-10-28 04:54:02.103968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:11.773 passed 00:18:11.773 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-28 04:54:02.185146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:11.773 [2024-10-28 04:54:02.278649] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:11.773 [2024-10-28 04:54:02.286649] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:11.773 [2024-10-28 04:54:02.294644] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:11.773 [2024-10-28 04:54:02.302650] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:11.773 [2024-10-28 04:54:02.332738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:11.773 passed 00:18:12.031 Test: admin_create_io_sq_verify_pc ...[2024-10-28 04:54:02.414358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:12.031 [2024-10-28 04:54:02.442674] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:12.031 [2024-10-28 04:54:02.461474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:12.031 passed 00:18:12.031 Test: admin_create_io_qp_max_qps ...[2024-10-28 04:54:02.544851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:13.405 [2024-10-28 04:54:03.647653] nvme_ctrlr.c:5487:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:13.663 [2024-10-28 04:54:04.073325] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:13.663 passed 00:18:13.663 Test: admin_create_io_sq_shared_cq ...[2024-10-28 04:54:04.159187] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:13.921 [2024-10-28 04:54:04.291643] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:13.921 [2024-10-28 04:54:04.328716] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:13.921 passed 00:18:13.921 00:18:13.921 Run Summary: Type Total Ran Passed Failed Inactive 00:18:13.921 suites 1 1 n/a 0 0 00:18:13.921 tests 18 18 18 0 0 00:18:13.921 asserts 360 360 360 0 n/a 00:18:13.921 00:18:13.921 Elapsed time = 1.598 seconds 00:18:13.921 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2312025 00:18:13.921 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2312025 ']' 00:18:13.921 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2312025 00:18:13.921 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:18:13.921 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.921 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2312025 00:18:13.921 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:13.921 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:13.921 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2312025' 00:18:13.921 killing process with pid 2312025 00:18:13.921 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2312025 00:18:13.921 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2312025 00:18:14.180 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:14.180 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:14.180 00:18:14.180 real 0m6.743s 00:18:14.180 user 0m18.963s 00:18:14.180 sys 0m0.585s 00:18:14.180 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:14.180 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:14.180 ************************************ 00:18:14.180 END TEST nvmf_vfio_user_nvme_compliance 00:18:14.180 ************************************ 00:18:14.180 04:54:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:14.180 04:54:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:14.180 04:54:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:14.180 04:54:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:14.180 ************************************ 00:18:14.180 START TEST nvmf_vfio_user_fuzz 00:18:14.180 ************************************ 00:18:14.180 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:14.180 * Looking for 
test storage... 00:18:14.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:14.180 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:18:14.180 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1689 -- # lcov --version 00:18:14.180 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:14.440 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:18:14.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.441 --rc genhtml_branch_coverage=1 00:18:14.441 --rc genhtml_function_coverage=1 00:18:14.441 --rc genhtml_legend=1 00:18:14.441 --rc geninfo_all_blocks=1 00:18:14.441 --rc geninfo_unexecuted_blocks=1 00:18:14.441 00:18:14.441 ' 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:18:14.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.441 --rc genhtml_branch_coverage=1 00:18:14.441 --rc genhtml_function_coverage=1 00:18:14.441 --rc genhtml_legend=1 00:18:14.441 --rc geninfo_all_blocks=1 00:18:14.441 --rc geninfo_unexecuted_blocks=1 00:18:14.441 00:18:14.441 ' 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:18:14.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.441 --rc genhtml_branch_coverage=1 00:18:14.441 --rc genhtml_function_coverage=1 00:18:14.441 --rc genhtml_legend=1 00:18:14.441 --rc geninfo_all_blocks=1 00:18:14.441 --rc geninfo_unexecuted_blocks=1 00:18:14.441 00:18:14.441 ' 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:18:14.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.441 --rc genhtml_branch_coverage=1 00:18:14.441 --rc genhtml_function_coverage=1 00:18:14.441 --rc genhtml_legend=1 00:18:14.441 --rc geninfo_all_blocks=1 00:18:14.441 --rc geninfo_unexecuted_blocks=1 00:18:14.441 00:18:14.441 ' 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:14.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2312976 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:14.441 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2312976' 00:18:14.442 Process pid: 2312976 00:18:14.442 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:14.442 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2312976 00:18:14.442 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2312976 ']' 00:18:14.442 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.442 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:14.442 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:14.442 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:14.442 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:14.700 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:14.700 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:18:14.700 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:16.072 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:16.072 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.072 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:16.073 malloc0 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
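(The trid string built here is the only coupling between the fuzzer and the target just configured: the next step hands it to nvme_fuzz via -F, apparently with a fixed seed (-S 123456) so a failing run can be replayed. A condensed view of that invocation, with the flags as they appear in this run; -t 30 matches the roughly 30-second run seen below:)

    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    # Fuzz from core 1 (-m 0x2) for 30 s against the vfio-user endpoint described by $trid
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz \
        -m 0x2 -t 30 -S 123456 -F "$trid" -N -a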
00:18:16.073 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:48.195 Fuzzing completed. Shutting down the fuzz application 00:18:48.195 00:18:48.195 Dumping successful admin opcodes: 00:18:48.195 8, 9, 10, 24, 00:18:48.195 Dumping successful io opcodes: 00:18:48.195 0, 00:18:48.195 NS: 0x20000081ef00 I/O qp, Total commands completed: 574829, total successful commands: 2215, random_seed: 82856832 00:18:48.195 NS: 0x20000081ef00 admin qp, Total commands completed: 73392, total successful commands: 579, random_seed: 2468709440 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2312976 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2312976 ']' 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2312976 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2312976 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2312976' 00:18:48.195 killing process with pid 2312976 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2312976 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2312976 00:18:48.195 04:54:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:48.195 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:48.195 00:18:48.195 real 0m32.330s 00:18:48.195 user 0m31.689s 00:18:48.195 sys 0m28.498s 00:18:48.195 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:48.195 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:48.195 
************************************ 00:18:48.195 END TEST nvmf_vfio_user_fuzz 00:18:48.195 ************************************ 00:18:48.195 04:54:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:48.195 04:54:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:48.195 04:54:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:48.195 04:54:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:48.195 ************************************ 00:18:48.195 START TEST nvmf_auth_target 00:18:48.195 ************************************ 00:18:48.195 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:48.195 * Looking for test storage... 00:18:48.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:48.195 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:18:48.195 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1689 -- # lcov --version 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:18:48.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.196 --rc genhtml_branch_coverage=1 00:18:48.196 --rc genhtml_function_coverage=1 00:18:48.196 --rc genhtml_legend=1 00:18:48.196 --rc geninfo_all_blocks=1 00:18:48.196 --rc geninfo_unexecuted_blocks=1 00:18:48.196 00:18:48.196 ' 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:18:48.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.196 --rc genhtml_branch_coverage=1 00:18:48.196 --rc genhtml_function_coverage=1 00:18:48.196 --rc genhtml_legend=1 00:18:48.196 --rc geninfo_all_blocks=1 00:18:48.196 --rc geninfo_unexecuted_blocks=1 00:18:48.196 00:18:48.196 ' 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:18:48.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.196 --rc genhtml_branch_coverage=1 00:18:48.196 --rc genhtml_function_coverage=1 00:18:48.196 --rc genhtml_legend=1 00:18:48.196 --rc geninfo_all_blocks=1 00:18:48.196 --rc geninfo_unexecuted_blocks=1 00:18:48.196 00:18:48.196 ' 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:18:48.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.196 --rc genhtml_branch_coverage=1 00:18:48.196 --rc genhtml_function_coverage=1 00:18:48.196 --rc genhtml_legend=1 00:18:48.196 --rc geninfo_all_blocks=1 00:18:48.196 --rc geninfo_unexecuted_blocks=1 00:18:48.196 00:18:48.196 ' 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.196 04:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:48.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:48.196 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:48.197 04:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:48.763 
04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:48.763 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.763 04:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:48.763 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:48.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.763 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:48.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.764 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:49.022 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:49.023 04:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:49.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:18:49.023 00:18:49.023 --- 10.0.0.2 ping statistics --- 00:18:49.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.023 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:49.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:18:49.023 00:18:49.023 --- 10.0.0.1 ping statistics --- 00:18:49.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.023 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=2318824 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 2318824 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2318824 ']' 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
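With connectivity verified in both directions by the pings above, nvmfappstart launches the target application inside the namespace with the DH-HMAC-CHAP debug flag enabled and records its PID (2318824 here) for teardown. Roughly what that wrapper does in this job, using the paths from the trace:

  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  # waitforlisten (common/autotest_common.sh) then polls until the target's RPC socket,
  # /var/tmp/spdk.sock, accepts connections before any rpc.py calls are issued against it.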
00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.023 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2318850 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ec4138eef266132c803685d142f25fb70842f375bc6014e2 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Idp 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key ec4138eef266132c803685d142f25fb70842f375bc6014e2 0 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ec4138eef266132c803685d142f25fb70842f375bc6014e2 0 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ec4138eef266132c803685d142f25fb70842f375bc6014e2 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:18:49.303 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Idp 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Idp 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Idp 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=31a74e46ed719c9b7e64cd4f42062e7d94e427714feffffd1bf98fbcb7ea9cdc 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.1lC 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 31a74e46ed719c9b7e64cd4f42062e7d94e427714feffffd1bf98fbcb7ea9cdc 3 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 31a74e46ed719c9b7e64cd4f42062e7d94e427714feffffd1bf98fbcb7ea9cdc 3 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=31a74e46ed719c9b7e64cd4f42062e7d94e427714feffffd1bf98fbcb7ea9cdc 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.1lC 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.1lC 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.1lC 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=53dd1ae216682c240023a398a21bc421 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.pQB 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 53dd1ae216682c240023a398a21bc421 1 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 53dd1ae216682c240023a398a21bc421 1 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=53dd1ae216682c240023a398a21bc421 00:18:49.561 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:18:49.562 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.pQB 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.pQB 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.pQB 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f31093225d31ff2ef4301f50e48e23d3b5b34b0318ee8499 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.StY 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f31093225d31ff2ef4301f50e48e23d3b5b34b0318ee8499 2 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f31093225d31ff2ef4301f50e48e23d3b5b34b0318ee8499 2 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:49.562 04:54:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f31093225d31ff2ef4301f50e48e23d3b5b34b0318ee8499 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.StY 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.StY 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.StY 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=7552b74df798412abea724e7b6ffec3470a80c3ee33c6725 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.cMf 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 7552b74df798412abea724e7b6ffec3470a80c3ee33c6725 2 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 7552b74df798412abea724e7b6ffec3470a80c3ee33c6725 2 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=7552b74df798412abea724e7b6ffec3470a80c3ee33c6725 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.cMf 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.cMf 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.cMf 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ee0553cc09b4708070bd433251cd6901 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.iR5 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key ee0553cc09b4708070bd433251cd6901 1 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ee0553cc09b4708070bd433251cd6901 1 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ee0553cc09b4708070bd433251cd6901 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:18:49.562 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.iR5 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.iR5 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.iR5 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f10079ef0c0b832140fe9db69faed769d9efee1b7605b0160d29bd43e2376c73 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.cMc 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key f10079ef0c0b832140fe9db69faed769d9efee1b7605b0160d29bd43e2376c73 3 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f10079ef0c0b832140fe9db69faed769d9efee1b7605b0160d29bd43e2376c73 3 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f10079ef0c0b832140fe9db69faed769d9efee1b7605b0160d29bd43e2376c73 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.cMc 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.cMc 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.cMc 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2318824 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2318824 ']' 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.820 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.078 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.078 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:50.078 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2318850 /var/tmp/host.sock 00:18:50.078 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2318850 ']' 00:18:50.078 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:50.078 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:50.078 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:50.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
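At this point four key pairs have been generated. gen_dhchap_key draws len/2 random bytes with xxd and format_dhchap_key wraps the resulting hex string as DHHC-1:<id>:<base64 payload>:, where <id> follows the digests map traced above (0 = null, 1 = sha256, 2 = sha384, 3 = sha512) and the payload is the ASCII hex key followed by a 4-byte checksum (a CRC-32, per the DH-HMAC-CHAP secret representation) added by the helper's python one-liner; each secret lands in a mode-0600 file under /tmp. A sketch that only inspects one of the secrets generated above (it reappears verbatim in the nvme connect call later in this log) rather than reproducing the checksum step:

  secret='DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==:'
  payload=${secret#DHHC-1:00:}       # drop the prefix and digest id
  payload=${payload%:}               # drop the trailing colon
  echo "$payload" | base64 -d | xxd  # 48 hex characters of key material, then the 4 checksum bytes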
00:18:50.078 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:50.078 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Idp 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Idp 00:18:50.336 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Idp 00:18:50.593 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.1lC ]] 00:18:50.593 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1lC 00:18:50.593 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.593 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.593 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.593 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1lC 00:18:50.593 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1lC 00:18:50.850 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:50.850 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pQB 00:18:50.850 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.850 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.850 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.850 04:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.pQB 00:18:50.850 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.pQB 00:18:51.106 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.StY ]] 00:18:51.106 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.StY 00:18:51.106 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.106 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.106 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.107 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.StY 00:18:51.107 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.StY 00:18:51.363 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:51.363 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.cMf 00:18:51.363 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.363 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.363 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.363 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.cMf 00:18:51.363 04:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.cMf 00:18:51.927 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.iR5 ]] 00:18:51.927 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iR5 00:18:51.927 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.927 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.927 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.927 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iR5 00:18:51.927 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iR5 00:18:51.927 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:51.927 04:54:42 
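Each generated secret is registered twice, once in the target's keyring and once in the keyring of the host-side spdk_tgt listening on /var/tmp/host.sock; the ckeyN entries are the controller keys passed later as --dhchap-ctrlr-key, and key3 deliberately has no controller key. Condensed from the trace for key0/ckey0 (rpc_cmd in the harness wraps rpc.py against the target's default socket; the loop repeats for the other indices):

  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.Idp
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Idp
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1lC
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1lC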
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.cMc 00:18:51.927 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.927 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.184 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.184 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.cMc 00:18:52.184 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.cMc 00:18:52.441 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:52.441 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:52.441 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.441 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.441 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:52.441 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:52.699 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:52.699 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.699 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:52.699 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:52.699 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:52.699 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.699 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.699 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.699 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.699 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.699 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.699 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.699 
04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.957 00:18:52.957 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.957 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.957 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.215 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.215 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.215 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.215 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.215 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.215 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.215 { 00:18:53.215 "cntlid": 1, 00:18:53.215 "qid": 0, 00:18:53.215 "state": "enabled", 00:18:53.215 "thread": "nvmf_tgt_poll_group_000", 00:18:53.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:53.215 "listen_address": { 00:18:53.215 "trtype": "TCP", 00:18:53.215 "adrfam": "IPv4", 00:18:53.215 "traddr": "10.0.0.2", 00:18:53.215 "trsvcid": "4420" 00:18:53.215 }, 00:18:53.215 "peer_address": { 00:18:53.215 "trtype": "TCP", 00:18:53.215 "adrfam": "IPv4", 00:18:53.215 "traddr": "10.0.0.1", 00:18:53.215 "trsvcid": "55490" 00:18:53.215 }, 00:18:53.215 "auth": { 00:18:53.215 "state": "completed", 00:18:53.215 "digest": "sha256", 00:18:53.215 "dhgroup": "null" 00:18:53.215 } 00:18:53.215 } 00:18:53.215 ]' 00:18:53.215 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.215 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.215 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.215 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:53.215 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.472 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.472 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.472 04:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.729 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:18:53.729 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:18:54.663 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.663 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.663 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.663 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.663 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.663 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.663 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:54.663 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:54.921 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:54.921 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.921 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.921 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:54.921 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:54.921 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.921 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.921 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.921 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.921 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.921 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.921 04:54:45 
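The trace above registers the DH-HMAC-CHAP key files with both the target and the host-side RPC socket, then pins the host to a single digest/DH-group pair before each connect attempt. A minimal sketch of those steps, assuming rpc.py and the /tmp key files exist exactly as the log shows:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock

  # Register the secret file with the target-side keyring (default RPC socket) ...
  $RPC keyring_file_add_key key3 /tmp/spdk.key-sha512.cMc
  # ... and with the host-side bdev_nvme instance listening on its own socket.
  $RPC -s "$HOST_SOCK" keyring_file_add_key key3 /tmp/spdk.key-sha512.cMc
  # Limit the host to one digest and one DH group for the iteration under test.
  $RPC -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
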
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.921 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.179 00:18:55.179 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.179 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.179 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.437 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.437 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.437 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.437 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.437 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.437 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.437 { 00:18:55.437 "cntlid": 3, 00:18:55.437 "qid": 0, 00:18:55.437 "state": "enabled", 00:18:55.437 "thread": "nvmf_tgt_poll_group_000", 00:18:55.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:55.437 "listen_address": { 00:18:55.437 "trtype": "TCP", 00:18:55.437 "adrfam": "IPv4", 00:18:55.437 "traddr": "10.0.0.2", 00:18:55.437 "trsvcid": "4420" 00:18:55.437 }, 00:18:55.437 "peer_address": { 00:18:55.437 "trtype": "TCP", 00:18:55.437 "adrfam": "IPv4", 00:18:55.437 "traddr": "10.0.0.1", 00:18:55.437 "trsvcid": "55504" 00:18:55.437 }, 00:18:55.437 "auth": { 00:18:55.437 "state": "completed", 00:18:55.437 "digest": "sha256", 00:18:55.437 "dhgroup": "null" 00:18:55.437 } 00:18:55.437 } 00:18:55.437 ]' 00:18:55.437 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.437 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.437 04:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.437 04:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:55.437 04:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.695 04:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.695 04:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.695 04:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.952 04:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:18:55.953 04:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:18:56.886 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.886 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.886 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.886 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.886 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.886 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.886 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:56.886 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:57.144 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:57.144 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.145 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:57.145 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:57.145 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:57.145 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.145 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.145 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.145 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.145 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.145 04:54:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.145 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.145 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.403 00:18:57.403 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.403 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.403 04:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.661 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.661 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.661 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.661 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.661 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.661 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.661 { 00:18:57.661 "cntlid": 5, 00:18:57.661 "qid": 0, 00:18:57.661 "state": "enabled", 00:18:57.661 "thread": "nvmf_tgt_poll_group_000", 00:18:57.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:57.661 "listen_address": { 00:18:57.661 "trtype": "TCP", 00:18:57.661 "adrfam": "IPv4", 00:18:57.661 "traddr": "10.0.0.2", 00:18:57.661 "trsvcid": "4420" 00:18:57.661 }, 00:18:57.661 "peer_address": { 00:18:57.661 "trtype": "TCP", 00:18:57.661 "adrfam": "IPv4", 00:18:57.661 "traddr": "10.0.0.1", 00:18:57.661 "trsvcid": "55538" 00:18:57.661 }, 00:18:57.661 "auth": { 00:18:57.661 "state": "completed", 00:18:57.661 "digest": "sha256", 00:18:57.661 "dhgroup": "null" 00:18:57.661 } 00:18:57.661 } 00:18:57.661 ]' 00:18:57.661 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.661 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.661 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.918 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:57.918 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.919 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.919 04:54:48 
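Each iteration then verifies the negotiated parameters on the target side. The jq checks the script performs can be summarized in the following sketch, with the subsystem NQN and expected values copied from the surrounding log:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Pull the active qpairs for the subsystem and confirm the auth block reports
  # the expected digest, DH group, and a completed authentication state.
  qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == null      ]]
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
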
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.919 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.176 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:18:58.176 04:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:18:59.111 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.111 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.111 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.111 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.111 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.111 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.111 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.111 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:59.369 04:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:59.627 00:18:59.627 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.627 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.627 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.885 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.885 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.885 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.885 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.885 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.885 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.885 { 00:18:59.885 "cntlid": 7, 00:18:59.885 "qid": 0, 00:18:59.885 "state": "enabled", 00:18:59.885 "thread": "nvmf_tgt_poll_group_000", 00:18:59.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:59.885 "listen_address": { 00:18:59.885 "trtype": "TCP", 00:18:59.885 "adrfam": "IPv4", 00:18:59.885 "traddr": "10.0.0.2", 00:18:59.885 "trsvcid": "4420" 00:18:59.885 }, 00:18:59.885 "peer_address": { 00:18:59.885 "trtype": "TCP", 00:18:59.885 "adrfam": "IPv4", 00:18:59.885 "traddr": "10.0.0.1", 00:18:59.885 "trsvcid": "38318" 00:18:59.885 }, 00:18:59.885 "auth": { 00:18:59.885 "state": "completed", 00:18:59.885 "digest": "sha256", 00:18:59.885 "dhgroup": "null" 00:18:59.885 } 00:18:59.885 } 00:18:59.885 ]' 00:18:59.885 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.143 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.143 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.143 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:00.143 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.143 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.143 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.143 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.401 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:00.401 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:01.334 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.334 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.334 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.334 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.334 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.334 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.334 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.334 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:01.334 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.592 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.850 00:19:01.850 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.851 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.851 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.109 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.109 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.109 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.109 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.110 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.110 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.110 { 00:19:02.110 "cntlid": 9, 00:19:02.110 "qid": 0, 00:19:02.110 "state": "enabled", 00:19:02.110 "thread": "nvmf_tgt_poll_group_000", 00:19:02.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:02.110 "listen_address": { 00:19:02.110 "trtype": "TCP", 00:19:02.110 "adrfam": "IPv4", 00:19:02.110 "traddr": "10.0.0.2", 00:19:02.110 "trsvcid": "4420" 00:19:02.110 }, 00:19:02.110 "peer_address": { 00:19:02.110 "trtype": "TCP", 00:19:02.110 "adrfam": "IPv4", 00:19:02.110 "traddr": "10.0.0.1", 00:19:02.110 "trsvcid": "38358" 00:19:02.110 }, 00:19:02.110 "auth": { 00:19:02.110 "state": "completed", 00:19:02.110 "digest": "sha256", 00:19:02.110 "dhgroup": "ffdhe2048" 00:19:02.110 } 00:19:02.110 } 00:19:02.110 ]' 00:19:02.110 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.368 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.368 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.368 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:02.368 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.368 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.368 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.368 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.625 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:19:02.625 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:19:03.559 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.559 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.559 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.559 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.559 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.559 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.559 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:03.559 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:03.817 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:03.817 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.817 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.817 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:03.817 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:03.817 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.817 04:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.817 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.817 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.817 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.817 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.817 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.075 00:19:04.333 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.333 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.333 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.591 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.591 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.591 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.591 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.591 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.591 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.591 { 00:19:04.591 "cntlid": 11, 00:19:04.591 "qid": 0, 00:19:04.591 "state": "enabled", 00:19:04.591 "thread": "nvmf_tgt_poll_group_000", 00:19:04.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:04.591 "listen_address": { 00:19:04.591 "trtype": "TCP", 00:19:04.591 "adrfam": "IPv4", 00:19:04.591 "traddr": "10.0.0.2", 00:19:04.591 "trsvcid": "4420" 00:19:04.591 }, 00:19:04.591 "peer_address": { 00:19:04.591 "trtype": "TCP", 00:19:04.591 "adrfam": "IPv4", 00:19:04.591 "traddr": "10.0.0.1", 00:19:04.591 "trsvcid": "38386" 00:19:04.591 }, 00:19:04.591 "auth": { 00:19:04.591 "state": "completed", 00:19:04.591 "digest": "sha256", 00:19:04.591 "dhgroup": "ffdhe2048" 00:19:04.591 } 00:19:04.591 } 00:19:04.591 ]' 00:19:04.591 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.591 04:54:55 
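The body of each connect_authenticate call pairs a target-side nvmf_subsystem_add_host with a host-side bdev_nvme_attach_controller using the same key, then tears the controller down again before the next combination. Condensed into a sketch, with addresses, NQNs and key names taken from the log and the optional ctrlr-key handling simplified:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Authorize the host on the target with the key pair under test.
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Attach from the host, authenticating with the same keys.
  $RPC -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Confirm the controller came up, then detach it again.
  [[ $($RPC -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  $RPC -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
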
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.591 04:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.591 04:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:04.591 04:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.591 04:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.591 04:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.591 04:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.848 04:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:19:04.848 04:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:19:05.781 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.781 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.781 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.781 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.781 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.781 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.781 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:05.781 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.039 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:06.039 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.039 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.039 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:06.039 04:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:06.039 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.039 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.039 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.039 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.039 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.039 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.039 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.039 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.605 00:19:06.605 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.605 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.605 04:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.862 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.863 { 00:19:06.863 "cntlid": 13, 00:19:06.863 "qid": 0, 00:19:06.863 "state": "enabled", 00:19:06.863 "thread": "nvmf_tgt_poll_group_000", 00:19:06.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:06.863 "listen_address": { 00:19:06.863 "trtype": "TCP", 00:19:06.863 "adrfam": "IPv4", 00:19:06.863 "traddr": "10.0.0.2", 00:19:06.863 "trsvcid": "4420" 00:19:06.863 }, 00:19:06.863 "peer_address": { 00:19:06.863 "trtype": "TCP", 00:19:06.863 "adrfam": "IPv4", 00:19:06.863 "traddr": "10.0.0.1", 00:19:06.863 "trsvcid": "38412" 00:19:06.863 }, 00:19:06.863 "auth": { 00:19:06.863 "state": "completed", 00:19:06.863 "digest": 
"sha256", 00:19:06.863 "dhgroup": "ffdhe2048" 00:19:06.863 } 00:19:06.863 } 00:19:06.863 ]' 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.863 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.121 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:19:07.121 04:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:19:08.055 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.055 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.055 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.055 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.055 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.055 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.055 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.055 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.313 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:08.313 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.313 04:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.313 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:08.313 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:08.313 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.313 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:08.313 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.314 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.314 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.314 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:08.314 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.314 04:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.879 00:19:08.879 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.879 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.879 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.137 { 00:19:09.137 "cntlid": 15, 00:19:09.137 "qid": 0, 00:19:09.137 "state": "enabled", 00:19:09.137 "thread": "nvmf_tgt_poll_group_000", 00:19:09.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:09.137 "listen_address": { 00:19:09.137 "trtype": "TCP", 00:19:09.137 "adrfam": "IPv4", 00:19:09.137 "traddr": "10.0.0.2", 00:19:09.137 "trsvcid": "4420" 00:19:09.137 }, 00:19:09.137 "peer_address": { 00:19:09.137 "trtype": "TCP", 00:19:09.137 "adrfam": "IPv4", 00:19:09.137 "traddr": "10.0.0.1", 00:19:09.137 
"trsvcid": "51316" 00:19:09.137 }, 00:19:09.137 "auth": { 00:19:09.137 "state": "completed", 00:19:09.137 "digest": "sha256", 00:19:09.137 "dhgroup": "ffdhe2048" 00:19:09.137 } 00:19:09.137 } 00:19:09.137 ]' 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.137 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.395 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:09.395 04:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:10.329 04:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.329 04:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.329 04:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.329 04:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.329 04:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.329 04:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.329 04:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.329 04:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:10.329 04:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:10.587 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:10.587 04:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.587 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:10.587 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:10.587 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:10.587 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.587 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.587 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.587 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.587 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.587 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.587 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.587 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.152 00:19:11.153 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.153 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.153 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.411 { 00:19:11.411 "cntlid": 17, 00:19:11.411 "qid": 0, 00:19:11.411 "state": "enabled", 00:19:11.411 "thread": "nvmf_tgt_poll_group_000", 00:19:11.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:11.411 "listen_address": { 00:19:11.411 "trtype": "TCP", 00:19:11.411 "adrfam": "IPv4", 
00:19:11.411 "traddr": "10.0.0.2", 00:19:11.411 "trsvcid": "4420" 00:19:11.411 }, 00:19:11.411 "peer_address": { 00:19:11.411 "trtype": "TCP", 00:19:11.411 "adrfam": "IPv4", 00:19:11.411 "traddr": "10.0.0.1", 00:19:11.411 "trsvcid": "51356" 00:19:11.411 }, 00:19:11.411 "auth": { 00:19:11.411 "state": "completed", 00:19:11.411 "digest": "sha256", 00:19:11.411 "dhgroup": "ffdhe3072" 00:19:11.411 } 00:19:11.411 } 00:19:11.411 ]' 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.411 04:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.670 04:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:19:11.670 04:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:19:12.604 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.604 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.604 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.604 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.604 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.604 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.604 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.604 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.862 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.428 00:19:13.428 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.428 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.428 04:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.684 { 
00:19:13.684 "cntlid": 19, 00:19:13.684 "qid": 0, 00:19:13.684 "state": "enabled", 00:19:13.684 "thread": "nvmf_tgt_poll_group_000", 00:19:13.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:13.684 "listen_address": { 00:19:13.684 "trtype": "TCP", 00:19:13.684 "adrfam": "IPv4", 00:19:13.684 "traddr": "10.0.0.2", 00:19:13.684 "trsvcid": "4420" 00:19:13.684 }, 00:19:13.684 "peer_address": { 00:19:13.684 "trtype": "TCP", 00:19:13.684 "adrfam": "IPv4", 00:19:13.684 "traddr": "10.0.0.1", 00:19:13.684 "trsvcid": "51398" 00:19:13.684 }, 00:19:13.684 "auth": { 00:19:13.684 "state": "completed", 00:19:13.684 "digest": "sha256", 00:19:13.684 "dhgroup": "ffdhe3072" 00:19:13.684 } 00:19:13.684 } 00:19:13.684 ]' 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.684 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.942 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:19:13.942 04:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:19:14.874 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.874 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.874 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.874 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.874 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.874 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.874 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.874 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.132 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:15.132 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.132 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.132 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:15.132 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:15.132 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.132 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.132 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.132 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.132 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.132 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.132 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.133 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.390 00:19:15.390 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.390 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.390 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.648 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.648 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.648 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.648 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.648 04:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.649 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.649 { 00:19:15.649 "cntlid": 21, 00:19:15.649 "qid": 0, 00:19:15.649 "state": "enabled", 00:19:15.649 "thread": "nvmf_tgt_poll_group_000", 00:19:15.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:15.649 "listen_address": { 00:19:15.649 "trtype": "TCP", 00:19:15.649 "adrfam": "IPv4", 00:19:15.649 "traddr": "10.0.0.2", 00:19:15.649 "trsvcid": "4420" 00:19:15.649 }, 00:19:15.649 "peer_address": { 00:19:15.649 "trtype": "TCP", 00:19:15.649 "adrfam": "IPv4", 00:19:15.649 "traddr": "10.0.0.1", 00:19:15.649 "trsvcid": "51440" 00:19:15.649 }, 00:19:15.649 "auth": { 00:19:15.649 "state": "completed", 00:19:15.649 "digest": "sha256", 00:19:15.649 "dhgroup": "ffdhe3072" 00:19:15.649 } 00:19:15.649 } 00:19:15.649 ]' 00:19:15.649 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.906 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.906 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.906 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.906 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.906 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.906 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.907 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.164 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:19:16.164 04:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:19:17.098 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.098 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.098 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.098 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.098 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:17.098 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.098 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.098 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.356 04:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.954 00:19:17.954 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.954 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.954 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.954 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.954 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.954 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.954 04:55:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.954 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.954 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.954 { 00:19:17.954 "cntlid": 23, 00:19:17.954 "qid": 0, 00:19:17.954 "state": "enabled", 00:19:17.954 "thread": "nvmf_tgt_poll_group_000", 00:19:17.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:17.954 "listen_address": { 00:19:17.954 "trtype": "TCP", 00:19:17.954 "adrfam": "IPv4", 00:19:17.954 "traddr": "10.0.0.2", 00:19:17.954 "trsvcid": "4420" 00:19:17.954 }, 00:19:17.954 "peer_address": { 00:19:17.954 "trtype": "TCP", 00:19:17.954 "adrfam": "IPv4", 00:19:17.954 "traddr": "10.0.0.1", 00:19:17.954 "trsvcid": "51462" 00:19:17.954 }, 00:19:17.954 "auth": { 00:19:17.954 "state": "completed", 00:19:17.954 "digest": "sha256", 00:19:17.954 "dhgroup": "ffdhe3072" 00:19:17.954 } 00:19:17.954 } 00:19:17.954 ]' 00:19:18.281 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.281 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.281 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.281 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.281 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.281 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.281 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.281 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.540 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:18.540 04:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:19.474 04:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.474 04:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.474 04:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.474 04:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.474 04:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:19.474 04:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.474 04:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.474 04:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:19.474 04:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.732 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.989 00:19:20.246 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.246 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.246 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.503 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.503 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.503 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.503 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.503 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.503 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.503 { 00:19:20.503 "cntlid": 25, 00:19:20.503 "qid": 0, 00:19:20.503 "state": "enabled", 00:19:20.503 "thread": "nvmf_tgt_poll_group_000", 00:19:20.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:20.503 "listen_address": { 00:19:20.503 "trtype": "TCP", 00:19:20.503 "adrfam": "IPv4", 00:19:20.503 "traddr": "10.0.0.2", 00:19:20.503 "trsvcid": "4420" 00:19:20.503 }, 00:19:20.503 "peer_address": { 00:19:20.503 "trtype": "TCP", 00:19:20.503 "adrfam": "IPv4", 00:19:20.503 "traddr": "10.0.0.1", 00:19:20.503 "trsvcid": "35390" 00:19:20.503 }, 00:19:20.503 "auth": { 00:19:20.503 "state": "completed", 00:19:20.503 "digest": "sha256", 00:19:20.503 "dhgroup": "ffdhe4096" 00:19:20.503 } 00:19:20.503 } 00:19:20.503 ]' 00:19:20.503 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.503 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.503 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.504 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.504 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.504 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.504 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.504 04:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.761 04:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:19:20.761 04:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:19:21.695 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.696 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.696 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.696 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.696 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.696 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.696 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:21.696 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.954 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.520 00:19:22.520 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.520 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.520 04:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.778 { 00:19:22.778 "cntlid": 27, 00:19:22.778 "qid": 0, 00:19:22.778 "state": "enabled", 00:19:22.778 "thread": "nvmf_tgt_poll_group_000", 00:19:22.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:22.778 "listen_address": { 00:19:22.778 "trtype": "TCP", 00:19:22.778 "adrfam": "IPv4", 00:19:22.778 "traddr": "10.0.0.2", 00:19:22.778 "trsvcid": "4420" 00:19:22.778 }, 00:19:22.778 "peer_address": { 00:19:22.778 "trtype": "TCP", 00:19:22.778 "adrfam": "IPv4", 00:19:22.778 "traddr": "10.0.0.1", 00:19:22.778 "trsvcid": "35414" 00:19:22.778 }, 00:19:22.778 "auth": { 00:19:22.778 "state": "completed", 00:19:22.778 "digest": "sha256", 00:19:22.778 "dhgroup": "ffdhe4096" 00:19:22.778 } 00:19:22.778 } 00:19:22.778 ]' 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.778 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.035 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:19:23.035 04:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:19:23.969 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:23.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.969 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.969 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.969 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.969 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.969 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.969 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:23.969 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.227 04:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.793 00:19:24.793 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
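The xtrace above repeats the same authentication cycle once per DH group and key index. As a condensed, hedged sketch of that cycle (the rpc, hostnqn and subnqn variables below are illustrative shorthand for values visible in the log, not literal contents of target/auth.sh, and the key names key1/ckey1 assume the keyring entries loaded earlier in the run):

# assumes the SPDK target and the host RPC server at /var/tmp/host.sock are already running
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
subnqn="nqn.2024-03.io.spdk:cnode0"

# restrict the host-side initiator to one digest / DH group combination
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# allow the host on the subsystem with a DH-HMAC-CHAP key pair
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# attach a controller through the host RPC server so the connect is authenticated
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# verify the qpair finished authentication, then tear the controller down again
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The subsequent kernel-initiator check in the log follows the same shape: nvme connect with --dhchap-secret / --dhchap-ctrl-secret, nvme disconnect, then nvmf_subsystem_remove_host before the next key is tried.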
00:19:24.793 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.793 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.051 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.051 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.051 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.051 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.052 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.052 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.052 { 00:19:25.052 "cntlid": 29, 00:19:25.052 "qid": 0, 00:19:25.052 "state": "enabled", 00:19:25.052 "thread": "nvmf_tgt_poll_group_000", 00:19:25.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:25.052 "listen_address": { 00:19:25.052 "trtype": "TCP", 00:19:25.052 "adrfam": "IPv4", 00:19:25.052 "traddr": "10.0.0.2", 00:19:25.052 "trsvcid": "4420" 00:19:25.052 }, 00:19:25.052 "peer_address": { 00:19:25.052 "trtype": "TCP", 00:19:25.052 "adrfam": "IPv4", 00:19:25.052 "traddr": "10.0.0.1", 00:19:25.052 "trsvcid": "35434" 00:19:25.052 }, 00:19:25.052 "auth": { 00:19:25.052 "state": "completed", 00:19:25.052 "digest": "sha256", 00:19:25.052 "dhgroup": "ffdhe4096" 00:19:25.052 } 00:19:25.052 } 00:19:25.052 ]' 00:19:25.052 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.052 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.052 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.052 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.052 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.052 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.052 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.052 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.310 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:19:25.310 04:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: 
--dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:19:26.684 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.684 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.684 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.684 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.684 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.684 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.684 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.684 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.684 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.250 00:19:27.250 04:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.250 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.250 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.507 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.507 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.507 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.507 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.507 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.507 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.507 { 00:19:27.507 "cntlid": 31, 00:19:27.507 "qid": 0, 00:19:27.507 "state": "enabled", 00:19:27.507 "thread": "nvmf_tgt_poll_group_000", 00:19:27.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:27.507 "listen_address": { 00:19:27.507 "trtype": "TCP", 00:19:27.507 "adrfam": "IPv4", 00:19:27.507 "traddr": "10.0.0.2", 00:19:27.507 "trsvcid": "4420" 00:19:27.507 }, 00:19:27.507 "peer_address": { 00:19:27.507 "trtype": "TCP", 00:19:27.507 "adrfam": "IPv4", 00:19:27.508 "traddr": "10.0.0.1", 00:19:27.508 "trsvcid": "35478" 00:19:27.508 }, 00:19:27.508 "auth": { 00:19:27.508 "state": "completed", 00:19:27.508 "digest": "sha256", 00:19:27.508 "dhgroup": "ffdhe4096" 00:19:27.508 } 00:19:27.508 } 00:19:27.508 ]' 00:19:27.508 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.508 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.508 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.508 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.508 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.508 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.508 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.508 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.766 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:27.766 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:28.700 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.700 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.700 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.700 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.700 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.700 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.700 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.700 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:28.700 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.958 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.522 00:19:29.522 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.522 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.522 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.087 { 00:19:30.087 "cntlid": 33, 00:19:30.087 "qid": 0, 00:19:30.087 "state": "enabled", 00:19:30.087 "thread": "nvmf_tgt_poll_group_000", 00:19:30.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:30.087 "listen_address": { 00:19:30.087 "trtype": "TCP", 00:19:30.087 "adrfam": "IPv4", 00:19:30.087 "traddr": "10.0.0.2", 00:19:30.087 "trsvcid": "4420" 00:19:30.087 }, 00:19:30.087 "peer_address": { 00:19:30.087 "trtype": "TCP", 00:19:30.087 "adrfam": "IPv4", 00:19:30.087 "traddr": "10.0.0.1", 00:19:30.087 "trsvcid": "34406" 00:19:30.087 }, 00:19:30.087 "auth": { 00:19:30.087 "state": "completed", 00:19:30.087 "digest": "sha256", 00:19:30.087 "dhgroup": "ffdhe6144" 00:19:30.087 } 00:19:30.087 } 00:19:30.087 ]' 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.087 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.344 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret 
DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:19:30.344 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:19:31.275 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.275 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.275 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.275 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.275 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.275 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.275 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.275 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.533 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.099 00:19:32.099 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.099 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.099 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.357 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.357 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.357 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.357 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.357 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.357 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.357 { 00:19:32.357 "cntlid": 35, 00:19:32.357 "qid": 0, 00:19:32.357 "state": "enabled", 00:19:32.357 "thread": "nvmf_tgt_poll_group_000", 00:19:32.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:32.357 "listen_address": { 00:19:32.357 "trtype": "TCP", 00:19:32.357 "adrfam": "IPv4", 00:19:32.357 "traddr": "10.0.0.2", 00:19:32.357 "trsvcid": "4420" 00:19:32.357 }, 00:19:32.357 "peer_address": { 00:19:32.357 "trtype": "TCP", 00:19:32.357 "adrfam": "IPv4", 00:19:32.357 "traddr": "10.0.0.1", 00:19:32.357 "trsvcid": "34434" 00:19:32.357 }, 00:19:32.357 "auth": { 00:19:32.357 "state": "completed", 00:19:32.357 "digest": "sha256", 00:19:32.357 "dhgroup": "ffdhe6144" 00:19:32.357 } 00:19:32.357 } 00:19:32.357 ]' 00:19:32.357 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.357 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.357 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.357 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.357 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.629 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.629 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.629 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.889 04:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:19:32.889 04:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:19:33.823 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.823 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.823 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.823 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.823 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.823 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.823 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:33.823 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.082 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.648 00:19:34.648 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.648 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.648 04:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.906 { 00:19:34.906 "cntlid": 37, 00:19:34.906 "qid": 0, 00:19:34.906 "state": "enabled", 00:19:34.906 "thread": "nvmf_tgt_poll_group_000", 00:19:34.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:34.906 "listen_address": { 00:19:34.906 "trtype": "TCP", 00:19:34.906 "adrfam": "IPv4", 00:19:34.906 "traddr": "10.0.0.2", 00:19:34.906 "trsvcid": "4420" 00:19:34.906 }, 00:19:34.906 "peer_address": { 00:19:34.906 "trtype": "TCP", 00:19:34.906 "adrfam": "IPv4", 00:19:34.906 "traddr": "10.0.0.1", 00:19:34.906 "trsvcid": "34460" 00:19:34.906 }, 00:19:34.906 "auth": { 00:19:34.906 "state": "completed", 00:19:34.906 "digest": "sha256", 00:19:34.906 "dhgroup": "ffdhe6144" 00:19:34.906 } 00:19:34.906 } 00:19:34.906 ]' 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:34.906 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.165 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:19:35.165 04:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:19:36.097 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.097 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.097 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.097 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.097 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.097 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.097 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.097 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.354 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:36.354 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.354 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:36.354 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:36.354 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.354 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.354 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:36.354 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.354 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.354 04:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.354 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.354 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.354 04:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.919 00:19:36.919 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.919 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.919 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.486 { 00:19:37.486 "cntlid": 39, 00:19:37.486 "qid": 0, 00:19:37.486 "state": "enabled", 00:19:37.486 "thread": "nvmf_tgt_poll_group_000", 00:19:37.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:37.486 "listen_address": { 00:19:37.486 "trtype": "TCP", 00:19:37.486 "adrfam": "IPv4", 00:19:37.486 "traddr": "10.0.0.2", 00:19:37.486 "trsvcid": "4420" 00:19:37.486 }, 00:19:37.486 "peer_address": { 00:19:37.486 "trtype": "TCP", 00:19:37.486 "adrfam": "IPv4", 00:19:37.486 "traddr": "10.0.0.1", 00:19:37.486 "trsvcid": "34486" 00:19:37.486 }, 00:19:37.486 "auth": { 00:19:37.486 "state": "completed", 00:19:37.486 "digest": "sha256", 00:19:37.486 "dhgroup": "ffdhe6144" 00:19:37.486 } 00:19:37.486 } 00:19:37.486 ]' 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.486 04:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.744 04:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:37.744 04:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:38.678 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.678 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.678 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.678 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.678 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.678 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.678 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.678 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:38.678 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.937 04:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.874 00:19:39.874 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.874 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.874 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.133 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.133 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.133 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.133 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.133 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.133 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.133 { 00:19:40.133 "cntlid": 41, 00:19:40.133 "qid": 0, 00:19:40.133 "state": "enabled", 00:19:40.133 "thread": "nvmf_tgt_poll_group_000", 00:19:40.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:40.133 "listen_address": { 00:19:40.133 "trtype": "TCP", 00:19:40.133 "adrfam": "IPv4", 00:19:40.133 "traddr": "10.0.0.2", 00:19:40.133 "trsvcid": "4420" 00:19:40.133 }, 00:19:40.133 "peer_address": { 00:19:40.133 "trtype": "TCP", 00:19:40.133 "adrfam": "IPv4", 00:19:40.133 "traddr": "10.0.0.1", 00:19:40.133 "trsvcid": "52436" 00:19:40.133 }, 00:19:40.133 "auth": { 00:19:40.133 "state": "completed", 00:19:40.133 "digest": "sha256", 00:19:40.133 "dhgroup": "ffdhe8192" 00:19:40.133 } 00:19:40.133 } 00:19:40.133 ]' 00:19:40.133 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.133 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.133 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.391 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.391 04:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.391 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.391 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.391 04:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.650 04:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:19:40.650 04:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:19:41.584 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.584 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.584 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.584 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.584 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.584 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.584 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.584 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.842 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.778 00:19:42.778 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.778 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.778 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.037 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.037 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.037 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.037 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.037 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.037 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.037 { 00:19:43.037 "cntlid": 43, 00:19:43.037 "qid": 0, 00:19:43.037 "state": "enabled", 00:19:43.037 "thread": "nvmf_tgt_poll_group_000", 00:19:43.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:43.037 "listen_address": { 00:19:43.037 "trtype": "TCP", 00:19:43.037 "adrfam": "IPv4", 00:19:43.037 "traddr": "10.0.0.2", 00:19:43.037 "trsvcid": "4420" 00:19:43.037 }, 00:19:43.037 "peer_address": { 00:19:43.037 "trtype": "TCP", 00:19:43.037 "adrfam": "IPv4", 00:19:43.037 "traddr": "10.0.0.1", 00:19:43.037 "trsvcid": "52472" 00:19:43.037 }, 00:19:43.037 "auth": { 00:19:43.037 "state": "completed", 00:19:43.037 "digest": "sha256", 00:19:43.037 "dhgroup": "ffdhe8192" 00:19:43.037 } 00:19:43.037 } 00:19:43.037 ]' 00:19:43.037 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.037 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:43.037 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.037 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.037 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.295 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.296 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.296 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.554 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:19:43.554 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:19:44.488 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.488 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.488 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.488 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.488 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.488 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.488 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.488 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.746 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:44.746 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.746 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.746 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:44.746 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:44.746 04:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.746 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.746 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.746 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.746 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.746 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.746 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.746 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.680 00:19:45.681 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.681 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.681 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.939 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.939 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.939 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.939 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.939 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.939 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.939 { 00:19:45.939 "cntlid": 45, 00:19:45.939 "qid": 0, 00:19:45.939 "state": "enabled", 00:19:45.939 "thread": "nvmf_tgt_poll_group_000", 00:19:45.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:45.939 "listen_address": { 00:19:45.939 "trtype": "TCP", 00:19:45.939 "adrfam": "IPv4", 00:19:45.939 "traddr": "10.0.0.2", 00:19:45.939 "trsvcid": "4420" 00:19:45.939 }, 00:19:45.939 "peer_address": { 00:19:45.939 "trtype": "TCP", 00:19:45.939 "adrfam": "IPv4", 00:19:45.939 "traddr": "10.0.0.1", 00:19:45.939 "trsvcid": "52498" 00:19:45.939 }, 00:19:45.939 "auth": { 00:19:45.939 "state": "completed", 00:19:45.939 "digest": "sha256", 00:19:45.939 "dhgroup": "ffdhe8192" 00:19:45.939 } 00:19:45.939 } 00:19:45.939 ]' 00:19:45.939 
04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.939 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.939 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.939 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.939 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.198 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.198 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.198 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.455 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:19:46.455 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:19:47.388 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.388 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.388 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.388 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.388 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.388 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.388 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.388 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.674 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:47.674 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.674 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.674 04:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:47.674 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:47.674 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.674 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:47.674 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.674 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.674 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.674 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:47.674 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.674 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.678 00:19:48.678 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.678 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.678 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.936 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.936 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.936 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.936 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.936 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.936 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.936 { 00:19:48.936 "cntlid": 47, 00:19:48.936 "qid": 0, 00:19:48.936 "state": "enabled", 00:19:48.936 "thread": "nvmf_tgt_poll_group_000", 00:19:48.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:48.936 "listen_address": { 00:19:48.936 "trtype": "TCP", 00:19:48.936 "adrfam": "IPv4", 00:19:48.936 "traddr": "10.0.0.2", 00:19:48.936 "trsvcid": "4420" 00:19:48.936 }, 00:19:48.936 "peer_address": { 00:19:48.936 "trtype": "TCP", 00:19:48.936 "adrfam": "IPv4", 00:19:48.936 "traddr": "10.0.0.1", 00:19:48.936 "trsvcid": "42022" 00:19:48.936 }, 00:19:48.936 "auth": { 00:19:48.936 "state": "completed", 00:19:48.936 
"digest": "sha256", 00:19:48.936 "dhgroup": "ffdhe8192" 00:19:48.936 } 00:19:48.936 } 00:19:48.936 ]' 00:19:48.936 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.936 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.936 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.936 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.936 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.936 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.937 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.937 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.194 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:49.194 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:50.567 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.567 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.567 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.567 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.567 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.567 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:50.567 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.567 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.567 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.567 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.567 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:50.567 04:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.567 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:50.567 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:50.567 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:50.567 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.567 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.567 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.567 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.567 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.568 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.568 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.568 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.827 00:19:50.827 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.827 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.827 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.396 { 00:19:51.396 "cntlid": 49, 00:19:51.396 "qid": 0, 00:19:51.396 "state": "enabled", 00:19:51.396 "thread": "nvmf_tgt_poll_group_000", 00:19:51.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.396 "listen_address": { 00:19:51.396 "trtype": "TCP", 00:19:51.396 "adrfam": "IPv4", 
00:19:51.396 "traddr": "10.0.0.2", 00:19:51.396 "trsvcid": "4420" 00:19:51.396 }, 00:19:51.396 "peer_address": { 00:19:51.396 "trtype": "TCP", 00:19:51.396 "adrfam": "IPv4", 00:19:51.396 "traddr": "10.0.0.1", 00:19:51.396 "trsvcid": "42044" 00:19:51.396 }, 00:19:51.396 "auth": { 00:19:51.396 "state": "completed", 00:19:51.396 "digest": "sha384", 00:19:51.396 "dhgroup": "null" 00:19:51.396 } 00:19:51.396 } 00:19:51.396 ]' 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.396 04:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.654 04:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:19:51.654 04:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:19:52.590 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.590 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.590 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.590 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.590 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.590 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.590 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.590 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.849 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.415 00:19:53.415 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.415 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.415 04:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.674 { 00:19:53.674 "cntlid": 51, 00:19:53.674 "qid": 0, 00:19:53.674 "state": "enabled", 
00:19:53.674 "thread": "nvmf_tgt_poll_group_000", 00:19:53.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.674 "listen_address": { 00:19:53.674 "trtype": "TCP", 00:19:53.674 "adrfam": "IPv4", 00:19:53.674 "traddr": "10.0.0.2", 00:19:53.674 "trsvcid": "4420" 00:19:53.674 }, 00:19:53.674 "peer_address": { 00:19:53.674 "trtype": "TCP", 00:19:53.674 "adrfam": "IPv4", 00:19:53.674 "traddr": "10.0.0.1", 00:19:53.674 "trsvcid": "42076" 00:19:53.674 }, 00:19:53.674 "auth": { 00:19:53.674 "state": "completed", 00:19:53.674 "digest": "sha384", 00:19:53.674 "dhgroup": "null" 00:19:53.674 } 00:19:53.674 } 00:19:53.674 ]' 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.674 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.932 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:19:53.932 04:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:19:54.867 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.867 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.867 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.867 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.867 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.867 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.868 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:19:54.868 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.126 04:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.692 00:19:55.692 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.692 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.692 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.950 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.950 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.950 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.950 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.950 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.950 04:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.950 { 00:19:55.950 "cntlid": 53, 00:19:55.950 "qid": 0, 00:19:55.950 "state": "enabled", 00:19:55.950 "thread": "nvmf_tgt_poll_group_000", 00:19:55.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:55.950 "listen_address": { 00:19:55.950 "trtype": "TCP", 00:19:55.950 "adrfam": "IPv4", 00:19:55.950 "traddr": "10.0.0.2", 00:19:55.950 "trsvcid": "4420" 00:19:55.950 }, 00:19:55.950 "peer_address": { 00:19:55.950 "trtype": "TCP", 00:19:55.950 "adrfam": "IPv4", 00:19:55.950 "traddr": "10.0.0.1", 00:19:55.950 "trsvcid": "42102" 00:19:55.950 }, 00:19:55.950 "auth": { 00:19:55.950 "state": "completed", 00:19:55.950 "digest": "sha384", 00:19:55.950 "dhgroup": "null" 00:19:55.950 } 00:19:55.950 } 00:19:55.950 ]' 00:19:55.950 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.950 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.950 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.950 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:55.950 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.208 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.208 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.208 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.467 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:19:56.467 04:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:19:57.401 04:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.401 04:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.401 04:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.401 04:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.401 04:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.401 04:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:19:57.401 04:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.401 04:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.660 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.227 00:19:58.227 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.227 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.227 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.486 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.486 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.486 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.486 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.486 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.486 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.486 { 00:19:58.486 "cntlid": 55, 00:19:58.486 "qid": 0, 00:19:58.486 "state": "enabled", 00:19:58.486 "thread": "nvmf_tgt_poll_group_000", 00:19:58.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.486 "listen_address": { 00:19:58.486 "trtype": "TCP", 00:19:58.486 "adrfam": "IPv4", 00:19:58.486 "traddr": "10.0.0.2", 00:19:58.486 "trsvcid": "4420" 00:19:58.486 }, 00:19:58.486 "peer_address": { 00:19:58.486 "trtype": "TCP", 00:19:58.486 "adrfam": "IPv4", 00:19:58.486 "traddr": "10.0.0.1", 00:19:58.486 "trsvcid": "44658" 00:19:58.486 }, 00:19:58.486 "auth": { 00:19:58.486 "state": "completed", 00:19:58.486 "digest": "sha384", 00:19:58.486 "dhgroup": "null" 00:19:58.486 } 00:19:58.486 } 00:19:58.486 ]' 00:19:58.486 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.486 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.486 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.486 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:58.486 04:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.486 04:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.486 04:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.486 04:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.744 04:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:19:58.744 04:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.117 04:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.117 04:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.684 00:20:00.684 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.684 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.684 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.943 { 00:20:00.943 "cntlid": 57, 00:20:00.943 "qid": 0, 00:20:00.943 "state": "enabled", 00:20:00.943 "thread": "nvmf_tgt_poll_group_000", 00:20:00.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.943 "listen_address": { 00:20:00.943 "trtype": "TCP", 00:20:00.943 "adrfam": "IPv4", 00:20:00.943 "traddr": "10.0.0.2", 00:20:00.943 "trsvcid": "4420" 00:20:00.943 }, 00:20:00.943 "peer_address": { 00:20:00.943 "trtype": "TCP", 00:20:00.943 "adrfam": "IPv4", 00:20:00.943 "traddr": "10.0.0.1", 00:20:00.943 "trsvcid": "44670" 00:20:00.943 }, 00:20:00.943 "auth": { 00:20:00.943 "state": "completed", 00:20:00.943 "digest": "sha384", 00:20:00.943 "dhgroup": "ffdhe2048" 00:20:00.943 } 00:20:00.943 } 00:20:00.943 ]' 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.943 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.201 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:20:01.201 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:20:02.136 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.136 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.136 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.136 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.136 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.136 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.136 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.136 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.395 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.961 00:20:02.961 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.961 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.961 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.219 { 00:20:03.219 "cntlid": 59, 00:20:03.219 "qid": 0, 00:20:03.219 "state": "enabled", 00:20:03.219 "thread": "nvmf_tgt_poll_group_000", 00:20:03.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.219 "listen_address": { 00:20:03.219 "trtype": "TCP", 00:20:03.219 "adrfam": "IPv4", 00:20:03.219 "traddr": "10.0.0.2", 00:20:03.219 "trsvcid": "4420" 00:20:03.219 }, 00:20:03.219 "peer_address": { 00:20:03.219 "trtype": "TCP", 00:20:03.219 "adrfam": "IPv4", 00:20:03.219 "traddr": "10.0.0.1", 00:20:03.219 "trsvcid": "44698" 00:20:03.219 }, 00:20:03.219 "auth": { 00:20:03.219 "state": "completed", 00:20:03.219 "digest": "sha384", 00:20:03.219 "dhgroup": "ffdhe2048" 00:20:03.219 } 00:20:03.219 } 00:20:03.219 ]' 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.219 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.478 04:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:20:03.478 04:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.852 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.417 00:20:05.417 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.417 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.417 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.675 { 00:20:05.675 "cntlid": 61, 00:20:05.675 "qid": 0, 00:20:05.675 "state": "enabled", 00:20:05.675 "thread": "nvmf_tgt_poll_group_000", 00:20:05.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:05.675 "listen_address": { 00:20:05.675 "trtype": "TCP", 00:20:05.675 "adrfam": "IPv4", 00:20:05.675 "traddr": "10.0.0.2", 00:20:05.675 "trsvcid": "4420" 00:20:05.675 }, 00:20:05.675 "peer_address": { 00:20:05.675 "trtype": "TCP", 00:20:05.675 "adrfam": "IPv4", 00:20:05.675 "traddr": "10.0.0.1", 00:20:05.675 "trsvcid": "44726" 00:20:05.675 }, 00:20:05.675 "auth": { 00:20:05.675 "state": "completed", 00:20:05.675 "digest": "sha384", 00:20:05.675 "dhgroup": "ffdhe2048" 00:20:05.675 } 00:20:05.675 } 00:20:05.675 ]' 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.675 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.932 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:20:05.932 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:20:06.866 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.866 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.866 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.866 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.866 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.866 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.866 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.866 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.431 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.689 00:20:07.689 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.689 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.689 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.947 { 00:20:07.947 "cntlid": 63, 00:20:07.947 "qid": 0, 00:20:07.947 "state": "enabled", 00:20:07.947 "thread": "nvmf_tgt_poll_group_000", 00:20:07.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.947 "listen_address": { 00:20:07.947 "trtype": "TCP", 00:20:07.947 "adrfam": "IPv4", 00:20:07.947 "traddr": "10.0.0.2", 00:20:07.947 "trsvcid": "4420" 00:20:07.947 }, 00:20:07.947 "peer_address": { 00:20:07.947 "trtype": "TCP", 00:20:07.947 "adrfam": "IPv4", 00:20:07.947 "traddr": "10.0.0.1", 00:20:07.947 "trsvcid": "44754" 00:20:07.947 }, 00:20:07.947 "auth": { 00:20:07.947 "state": "completed", 00:20:07.947 "digest": "sha384", 00:20:07.947 "dhgroup": "ffdhe2048" 00:20:07.947 } 00:20:07.947 } 00:20:07.947 ]' 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.947 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.205 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:20:08.205 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:20:09.590 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:09.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.590 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.590 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.590 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.590 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.590 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.590 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.590 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.590 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.590 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.848 
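At this point the loop has moved on to the ffdhe3072 DH group and has just attached nvme0 with key0/ckey0; what follows in the trace is the verification step each iteration runs: the host controller list is checked, the target is queried for the subsystem's active queue pair, and the reported auth fields are compared against the expected digest, DH group, and state before the controller is detached. A compact sketch of that check, assuming the same jq filters the trace uses (the qpairs variable and the re-declared wrappers are only there to keep the example self-contained):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

# The attached controller must show up on the host side under its bdev name.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Ask the target what was actually negotiated on the subsystem's queue pair.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach before the next key/dhgroup combination is configured.
hostrpc bdev_nvme_detach_controller nvme0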
00:20:10.106 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.106 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.106 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.364 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.364 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.364 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.364 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.364 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.364 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.364 { 00:20:10.364 "cntlid": 65, 00:20:10.364 "qid": 0, 00:20:10.364 "state": "enabled", 00:20:10.364 "thread": "nvmf_tgt_poll_group_000", 00:20:10.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.364 "listen_address": { 00:20:10.364 "trtype": "TCP", 00:20:10.364 "adrfam": "IPv4", 00:20:10.364 "traddr": "10.0.0.2", 00:20:10.364 "trsvcid": "4420" 00:20:10.364 }, 00:20:10.364 "peer_address": { 00:20:10.364 "trtype": "TCP", 00:20:10.364 "adrfam": "IPv4", 00:20:10.364 "traddr": "10.0.0.1", 00:20:10.364 "trsvcid": "50018" 00:20:10.364 }, 00:20:10.364 "auth": { 00:20:10.364 "state": "completed", 00:20:10.364 "digest": "sha384", 00:20:10.364 "dhgroup": "ffdhe3072" 00:20:10.364 } 00:20:10.364 } 00:20:10.364 ]' 00:20:10.364 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.365 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.365 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.365 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.365 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.365 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.365 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.365 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.622 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:20:10.623 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:20:11.555 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.555 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.555 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.555 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.555 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.556 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.556 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.556 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.122 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.381 00:20:12.381 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.381 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.381 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.639 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.639 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.639 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.639 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.639 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.639 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.639 { 00:20:12.639 "cntlid": 67, 00:20:12.639 "qid": 0, 00:20:12.639 "state": "enabled", 00:20:12.639 "thread": "nvmf_tgt_poll_group_000", 00:20:12.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:12.639 "listen_address": { 00:20:12.639 "trtype": "TCP", 00:20:12.639 "adrfam": "IPv4", 00:20:12.639 "traddr": "10.0.0.2", 00:20:12.639 "trsvcid": "4420" 00:20:12.639 }, 00:20:12.639 "peer_address": { 00:20:12.639 "trtype": "TCP", 00:20:12.639 "adrfam": "IPv4", 00:20:12.639 "traddr": "10.0.0.1", 00:20:12.639 "trsvcid": "50062" 00:20:12.639 }, 00:20:12.639 "auth": { 00:20:12.639 "state": "completed", 00:20:12.639 "digest": "sha384", 00:20:12.639 "dhgroup": "ffdhe3072" 00:20:12.639 } 00:20:12.639 } 00:20:12.639 ]' 00:20:12.639 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.639 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.639 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.639 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.639 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.900 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.900 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.900 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.158 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret 
DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:20:13.158 04:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:20:14.090 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.090 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.090 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.090 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.090 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.090 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.090 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.090 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.347 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.912 00:20:14.912 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.912 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.912 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.168 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.168 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.168 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.169 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.169 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.169 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.169 { 00:20:15.169 "cntlid": 69, 00:20:15.169 "qid": 0, 00:20:15.169 "state": "enabled", 00:20:15.169 "thread": "nvmf_tgt_poll_group_000", 00:20:15.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:15.169 "listen_address": { 00:20:15.169 "trtype": "TCP", 00:20:15.169 "adrfam": "IPv4", 00:20:15.169 "traddr": "10.0.0.2", 00:20:15.169 "trsvcid": "4420" 00:20:15.169 }, 00:20:15.169 "peer_address": { 00:20:15.169 "trtype": "TCP", 00:20:15.169 "adrfam": "IPv4", 00:20:15.169 "traddr": "10.0.0.1", 00:20:15.169 "trsvcid": "50084" 00:20:15.169 }, 00:20:15.169 "auth": { 00:20:15.169 "state": "completed", 00:20:15.169 "digest": "sha384", 00:20:15.169 "dhgroup": "ffdhe3072" 00:20:15.169 } 00:20:15.169 } 00:20:15.169 ]' 00:20:15.169 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.169 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.169 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.169 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.169 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.169 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.169 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.169 04:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:15.426 04:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:20:15.426 04:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:20:16.798 04:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.798 04:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.798 04:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.798 04:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.798 04:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.798 04:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.798 04:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.798 04:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
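The `hostrpc` and `bdev_connect` helpers that appear on every cycle of this trace are thin wrappers; judging only from the expanded xtrace lines (target/auth.sh@31 and target/auth.sh@60), they behave roughly like the sketch below. The function bodies are reconstructions inferred from the trace, not the actual script source, and the addresses and NQNs are simply the ones already shown above.

```bash
# Reconstructed from the xtrace output above -- a sketch, not target/auth.sh itself.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# target/auth.sh@31: forward an RPC to the host-side SPDK app over its UNIX socket
hostrpc() {
    "$rpc_py" -s /var/tmp/host.sock "$@"
}

# target/auth.sh@60: attach a controller bdev over TCP, passing through the
# -b <name> and --dhchap-* arguments supplied by the caller
bdev_connect() {
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 "$@"
}
```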
00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.798 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.364 00:20:17.364 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.364 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.364 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.622 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.622 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.622 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.622 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.622 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.622 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.622 { 00:20:17.622 "cntlid": 71, 00:20:17.622 "qid": 0, 00:20:17.622 "state": "enabled", 00:20:17.622 "thread": "nvmf_tgt_poll_group_000", 00:20:17.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.622 "listen_address": { 00:20:17.622 "trtype": "TCP", 00:20:17.622 "adrfam": "IPv4", 00:20:17.622 "traddr": "10.0.0.2", 00:20:17.622 "trsvcid": "4420" 00:20:17.622 }, 00:20:17.622 "peer_address": { 00:20:17.622 "trtype": "TCP", 00:20:17.622 "adrfam": "IPv4", 00:20:17.622 "traddr": "10.0.0.1", 00:20:17.622 "trsvcid": "50102" 00:20:17.622 }, 00:20:17.622 "auth": { 00:20:17.622 "state": "completed", 00:20:17.622 "digest": "sha384", 00:20:17.622 "dhgroup": "ffdhe3072" 00:20:17.622 } 00:20:17.622 } 00:20:17.622 ]' 00:20:17.622 04:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.622 04:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.622 04:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.622 04:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.622 04:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.622 04:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.622 04:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.622 04:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.880 04:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:20:17.881 04:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:20:18.814 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.814 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.814 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.814 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.814 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.814 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.814 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.814 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.814 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
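Each repetition in the trace follows the same per-key cycle: pin the host-side DH-HMAC-CHAP digest and DH group, register the host NQN on the target with the matching keys, attach a controller so the handshake runs, check the negotiated auth parameters on the resulting qpair, then tear down and redo the handshake with the kernel initiator. A condensed sketch of one cycle, using only the RPCs and nvme-cli flags visible in the log; `$digest`, `$dhgroup`, `$keyid` and the two DHHC-1 secret variables are placeholders for the concrete values shown above, and the target-side calls are assumed to go to rpc.py's default socket (the trace uses an `rpc_cmd` wrapper for those).

```bash
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# 1. Restrict the host-side bdev layer to one digest / DH group combination
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host on the target subsystem with the key under test
#    (the controller key is omitted when no ckey exists, as with key3 above)
"$rpc_py" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Attach a controller from the host app; DH-HMAC-CHAP runs on this connect
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 4. Verify what was negotiated on the target's qpair
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
"$rpc_py" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'        # expect "completed"

# 5. Tear down, then repeat the handshake with the in-kernel initiator
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret "$dhchap_secret" --dhchap-ctrl-secret "$dhchap_ctrl_secret"
nvme disconnect -n "$subnqn"
"$rpc_py" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
```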
00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.149 04:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.770 00:20:19.771 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.771 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.771 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.028 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.028 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.028 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.029 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.029 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.029 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.029 { 00:20:20.029 "cntlid": 73, 00:20:20.029 "qid": 0, 00:20:20.029 "state": "enabled", 00:20:20.029 "thread": "nvmf_tgt_poll_group_000", 00:20:20.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:20.029 "listen_address": { 00:20:20.029 "trtype": "TCP", 00:20:20.029 "adrfam": "IPv4", 00:20:20.029 "traddr": "10.0.0.2", 00:20:20.029 "trsvcid": "4420" 00:20:20.029 }, 00:20:20.029 "peer_address": { 00:20:20.029 "trtype": "TCP", 00:20:20.029 "adrfam": "IPv4", 00:20:20.029 "traddr": "10.0.0.1", 00:20:20.029 "trsvcid": "51186" 00:20:20.029 }, 00:20:20.029 "auth": { 00:20:20.029 "state": "completed", 00:20:20.029 "digest": "sha384", 00:20:20.029 "dhgroup": "ffdhe4096" 00:20:20.029 } 00:20:20.029 } 00:20:20.029 ]' 00:20:20.029 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.029 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.029 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.029 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.029 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.029 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.029 
04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.029 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.286 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:20:20.286 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:20:21.220 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.220 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.220 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.220 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.220 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.220 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.220 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.220 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.788 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.047 00:20:22.047 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.047 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.047 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.305 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.305 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.305 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.305 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.305 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.305 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.305 { 00:20:22.305 "cntlid": 75, 00:20:22.305 "qid": 0, 00:20:22.305 "state": "enabled", 00:20:22.305 "thread": "nvmf_tgt_poll_group_000", 00:20:22.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:22.305 "listen_address": { 00:20:22.305 "trtype": "TCP", 00:20:22.305 "adrfam": "IPv4", 00:20:22.305 "traddr": "10.0.0.2", 00:20:22.305 "trsvcid": "4420" 00:20:22.305 }, 00:20:22.305 "peer_address": { 00:20:22.305 "trtype": "TCP", 00:20:22.305 "adrfam": "IPv4", 00:20:22.305 "traddr": "10.0.0.1", 00:20:22.305 "trsvcid": "51204" 00:20:22.305 }, 00:20:22.305 "auth": { 00:20:22.305 "state": "completed", 00:20:22.305 "digest": "sha384", 00:20:22.305 "dhgroup": "ffdhe4096" 00:20:22.305 } 00:20:22.305 } 00:20:22.305 ]' 00:20:22.305 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.305 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.305 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.305 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:22.305 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.563 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.563 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.563 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.821 04:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:20:22.821 04:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:20:23.755 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.755 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.755 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.755 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.755 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.755 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.755 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.755 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.013 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.270 00:20:24.270 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.270 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.270 04:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.527 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.527 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.527 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.527 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.527 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.527 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.527 { 00:20:24.527 "cntlid": 77, 00:20:24.527 "qid": 0, 00:20:24.527 "state": "enabled", 00:20:24.527 "thread": "nvmf_tgt_poll_group_000", 00:20:24.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.527 "listen_address": { 00:20:24.527 "trtype": "TCP", 00:20:24.527 "adrfam": "IPv4", 00:20:24.528 "traddr": "10.0.0.2", 00:20:24.528 "trsvcid": "4420" 00:20:24.528 }, 00:20:24.528 "peer_address": { 00:20:24.528 "trtype": "TCP", 00:20:24.528 "adrfam": "IPv4", 00:20:24.528 "traddr": "10.0.0.1", 00:20:24.528 "trsvcid": "51240" 00:20:24.528 }, 00:20:24.528 "auth": { 00:20:24.528 "state": "completed", 00:20:24.528 "digest": "sha384", 00:20:24.528 "dhgroup": "ffdhe4096" 00:20:24.528 } 00:20:24.528 } 00:20:24.528 ]' 00:20:24.528 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.785 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.785 04:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.785 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.785 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.785 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.785 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.785 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.043 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:20:25.043 04:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:20:25.977 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.977 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.977 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.977 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.977 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.977 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.977 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.977 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.235 04:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.800 00:20:26.800 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.800 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.800 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.058 { 00:20:27.058 "cntlid": 79, 00:20:27.058 "qid": 0, 00:20:27.058 "state": "enabled", 00:20:27.058 "thread": "nvmf_tgt_poll_group_000", 00:20:27.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:27.058 "listen_address": { 00:20:27.058 "trtype": "TCP", 00:20:27.058 "adrfam": "IPv4", 00:20:27.058 "traddr": "10.0.0.2", 00:20:27.058 "trsvcid": "4420" 00:20:27.058 }, 00:20:27.058 "peer_address": { 00:20:27.058 "trtype": "TCP", 00:20:27.058 "adrfam": "IPv4", 00:20:27.058 "traddr": "10.0.0.1", 00:20:27.058 "trsvcid": "51260" 00:20:27.058 }, 00:20:27.058 "auth": { 00:20:27.058 "state": "completed", 00:20:27.058 "digest": "sha384", 00:20:27.058 "dhgroup": "ffdhe4096" 00:20:27.058 } 00:20:27.058 } 00:20:27.058 ]' 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.058 04:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.058 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.624 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:20:27.624 04:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:20:28.557 04:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.557 04:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.557 04:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.557 04:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.557 04:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.557 04:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.557 04:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.557 04:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.557 04:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.815 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:28.815 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.815 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.815 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:28.815 04:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:28.815 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.815 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.815 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.815 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.815 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.815 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.815 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.815 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.380 00:20:29.380 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.380 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.380 04:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.639 { 00:20:29.639 "cntlid": 81, 00:20:29.639 "qid": 0, 00:20:29.639 "state": "enabled", 00:20:29.639 "thread": "nvmf_tgt_poll_group_000", 00:20:29.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.639 "listen_address": { 00:20:29.639 "trtype": "TCP", 00:20:29.639 "adrfam": "IPv4", 00:20:29.639 "traddr": "10.0.0.2", 00:20:29.639 "trsvcid": "4420" 00:20:29.639 }, 00:20:29.639 "peer_address": { 00:20:29.639 "trtype": "TCP", 00:20:29.639 "adrfam": "IPv4", 00:20:29.639 "traddr": "10.0.0.1", 00:20:29.639 "trsvcid": "43448" 00:20:29.639 }, 00:20:29.639 "auth": { 00:20:29.639 "state": "completed", 00:20:29.639 "digest": 
"sha384", 00:20:29.639 "dhgroup": "ffdhe6144" 00:20:29.639 } 00:20:29.639 } 00:20:29.639 ]' 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.639 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.897 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:20:29.897 04:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.271 04:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.838 00:20:31.838 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.838 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.838 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.097 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.097 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.097 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.097 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.097 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.097 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.097 { 00:20:32.097 "cntlid": 83, 00:20:32.097 "qid": 0, 00:20:32.097 "state": "enabled", 00:20:32.097 "thread": "nvmf_tgt_poll_group_000", 00:20:32.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.097 "listen_address": { 00:20:32.097 "trtype": "TCP", 00:20:32.097 "adrfam": "IPv4", 00:20:32.097 "traddr": "10.0.0.2", 00:20:32.097 
"trsvcid": "4420" 00:20:32.097 }, 00:20:32.097 "peer_address": { 00:20:32.097 "trtype": "TCP", 00:20:32.097 "adrfam": "IPv4", 00:20:32.097 "traddr": "10.0.0.1", 00:20:32.097 "trsvcid": "43478" 00:20:32.097 }, 00:20:32.097 "auth": { 00:20:32.097 "state": "completed", 00:20:32.097 "digest": "sha384", 00:20:32.097 "dhgroup": "ffdhe6144" 00:20:32.097 } 00:20:32.097 } 00:20:32.097 ]' 00:20:32.097 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.355 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.356 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.356 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.356 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.356 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.356 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.356 04:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.614 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:20:32.614 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:20:33.546 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.546 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.546 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.546 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.546 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.546 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.546 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.546 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.805 
04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:33.805 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.805 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.805 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.805 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:33.805 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.805 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.805 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.805 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.805 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.805 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.805 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.805 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.371 00:20:34.371 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.371 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.371 04:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.937 { 00:20:34.937 "cntlid": 85, 00:20:34.937 "qid": 0, 00:20:34.937 "state": "enabled", 00:20:34.937 "thread": "nvmf_tgt_poll_group_000", 00:20:34.937 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.937 "listen_address": { 00:20:34.937 "trtype": "TCP", 00:20:34.937 "adrfam": "IPv4", 00:20:34.937 "traddr": "10.0.0.2", 00:20:34.937 "trsvcid": "4420" 00:20:34.937 }, 00:20:34.937 "peer_address": { 00:20:34.937 "trtype": "TCP", 00:20:34.937 "adrfam": "IPv4", 00:20:34.937 "traddr": "10.0.0.1", 00:20:34.937 "trsvcid": "43506" 00:20:34.937 }, 00:20:34.937 "auth": { 00:20:34.937 "state": "completed", 00:20:34.937 "digest": "sha384", 00:20:34.937 "dhgroup": "ffdhe6144" 00:20:34.937 } 00:20:34.937 } 00:20:34.937 ]' 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.937 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.195 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:20:35.195 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:20:36.128 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.128 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.128 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.128 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.128 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.128 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.128 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.128 04:56:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.387 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:36.387 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.387 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.387 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:36.387 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.387 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.387 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:36.387 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.387 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.644 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.645 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.645 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.645 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.210 00:20:37.210 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.210 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.210 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.468 { 00:20:37.468 "cntlid": 87, 
00:20:37.468 "qid": 0, 00:20:37.468 "state": "enabled", 00:20:37.468 "thread": "nvmf_tgt_poll_group_000", 00:20:37.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.468 "listen_address": { 00:20:37.468 "trtype": "TCP", 00:20:37.468 "adrfam": "IPv4", 00:20:37.468 "traddr": "10.0.0.2", 00:20:37.468 "trsvcid": "4420" 00:20:37.468 }, 00:20:37.468 "peer_address": { 00:20:37.468 "trtype": "TCP", 00:20:37.468 "adrfam": "IPv4", 00:20:37.468 "traddr": "10.0.0.1", 00:20:37.468 "trsvcid": "43522" 00:20:37.468 }, 00:20:37.468 "auth": { 00:20:37.468 "state": "completed", 00:20:37.468 "digest": "sha384", 00:20:37.468 "dhgroup": "ffdhe6144" 00:20:37.468 } 00:20:37.468 } 00:20:37.468 ]' 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.468 04:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.726 04:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:20:37.726 04:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:20:38.661 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.919 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.919 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.919 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.919 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.919 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.919 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.919 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.919 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.177 04:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.112 00:20:40.112 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.112 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.112 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.370 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.370 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.370 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.370 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.370 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.370 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.370 { 00:20:40.370 "cntlid": 89, 00:20:40.370 "qid": 0, 00:20:40.370 "state": "enabled", 00:20:40.370 "thread": "nvmf_tgt_poll_group_000", 00:20:40.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:40.370 "listen_address": { 00:20:40.370 "trtype": "TCP", 00:20:40.370 "adrfam": "IPv4", 00:20:40.370 "traddr": "10.0.0.2", 00:20:40.370 "trsvcid": "4420" 00:20:40.370 }, 00:20:40.370 "peer_address": { 00:20:40.370 "trtype": "TCP", 00:20:40.370 "adrfam": "IPv4", 00:20:40.370 "traddr": "10.0.0.1", 00:20:40.370 "trsvcid": "36208" 00:20:40.370 }, 00:20:40.370 "auth": { 00:20:40.370 "state": "completed", 00:20:40.370 "digest": "sha384", 00:20:40.370 "dhgroup": "ffdhe8192" 00:20:40.370 } 00:20:40.370 } 00:20:40.370 ]' 00:20:40.370 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.370 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.370 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.370 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:40.370 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.628 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.628 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.628 04:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.885 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:20:40.885 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:20:41.816 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.816 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.816 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.816 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.816 04:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.816 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.816 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:41.816 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.073 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.006 00:20:43.006 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.006 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.006 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.573 { 00:20:43.573 "cntlid": 91, 00:20:43.573 "qid": 0, 00:20:43.573 "state": "enabled", 00:20:43.573 "thread": "nvmf_tgt_poll_group_000", 00:20:43.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.573 "listen_address": { 00:20:43.573 "trtype": "TCP", 00:20:43.573 "adrfam": "IPv4", 00:20:43.573 "traddr": "10.0.0.2", 00:20:43.573 "trsvcid": "4420" 00:20:43.573 }, 00:20:43.573 "peer_address": { 00:20:43.573 "trtype": "TCP", 00:20:43.573 "adrfam": "IPv4", 00:20:43.573 "traddr": "10.0.0.1", 00:20:43.573 "trsvcid": "36246" 00:20:43.573 }, 00:20:43.573 "auth": { 00:20:43.573 "state": "completed", 00:20:43.573 "digest": "sha384", 00:20:43.573 "dhgroup": "ffdhe8192" 00:20:43.573 } 00:20:43.573 } 00:20:43.573 ]' 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.573 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.831 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:20:43.831 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:20:44.764 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.764 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.764 04:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.764 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.764 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.764 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.764 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:44.764 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.022 04:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.955 00:20:45.955 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.955 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.955 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.213 04:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.213 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.213 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.213 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.213 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.213 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.213 { 00:20:46.213 "cntlid": 93, 00:20:46.213 "qid": 0, 00:20:46.213 "state": "enabled", 00:20:46.213 "thread": "nvmf_tgt_poll_group_000", 00:20:46.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:46.213 "listen_address": { 00:20:46.213 "trtype": "TCP", 00:20:46.213 "adrfam": "IPv4", 00:20:46.213 "traddr": "10.0.0.2", 00:20:46.213 "trsvcid": "4420" 00:20:46.213 }, 00:20:46.213 "peer_address": { 00:20:46.213 "trtype": "TCP", 00:20:46.213 "adrfam": "IPv4", 00:20:46.213 "traddr": "10.0.0.1", 00:20:46.213 "trsvcid": "36264" 00:20:46.213 }, 00:20:46.213 "auth": { 00:20:46.213 "state": "completed", 00:20:46.213 "digest": "sha384", 00:20:46.213 "dhgroup": "ffdhe8192" 00:20:46.213 } 00:20:46.213 } 00:20:46.213 ]' 00:20:46.213 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.471 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.471 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.471 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.471 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.471 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.471 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.471 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.729 04:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:20:46.729 04:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:20:47.663 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.663 04:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.663 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.663 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.663 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.663 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.663 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.663 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.921 04:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:48.854 00:20:48.854 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.854 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.854 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.111 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.111 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.111 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.111 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.112 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.112 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.112 { 00:20:49.112 "cntlid": 95, 00:20:49.112 "qid": 0, 00:20:49.112 "state": "enabled", 00:20:49.112 "thread": "nvmf_tgt_poll_group_000", 00:20:49.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:49.112 "listen_address": { 00:20:49.112 "trtype": "TCP", 00:20:49.112 "adrfam": "IPv4", 00:20:49.112 "traddr": "10.0.0.2", 00:20:49.112 "trsvcid": "4420" 00:20:49.112 }, 00:20:49.112 "peer_address": { 00:20:49.112 "trtype": "TCP", 00:20:49.112 "adrfam": "IPv4", 00:20:49.112 "traddr": "10.0.0.1", 00:20:49.112 "trsvcid": "35134" 00:20:49.112 }, 00:20:49.112 "auth": { 00:20:49.112 "state": "completed", 00:20:49.112 "digest": "sha384", 00:20:49.112 "dhgroup": "ffdhe8192" 00:20:49.112 } 00:20:49.112 } 00:20:49.112 ]' 00:20:49.112 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.112 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.112 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.112 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.112 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.374 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.374 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.374 04:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.682 04:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:20:49.682 04:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:20:50.641 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.641 04:56:41 
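The same key material is also exercised through the Linux kernel initiator: nvme-cli receives the secrets in their DHHC-1 text form (the 01/02/03 field marks the SHA-256/384/512 variant of the secret) and connects straight to the TCP listener. When a controller key exists for the slot, --dhchap-ctrl-secret is added for bidirectional authentication; for key3, which has no controller key in this run, only --dhchap-secret is passed. A condensed sketch with this run's addresses and NQNs, secret values replaced by placeholders:

  hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Bidirectional: host secret plus controller secret.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "DHHC-1:01:<host secret>:" --dhchap-ctrl-secret "DHHC-1:02:<controller secret>:"

  # Unidirectional (key3 above): drop --dhchap-ctrl-secret and pass only the host secret.

  nvme disconnect -n "$subnqn"   # torn down again before the host entry is removed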
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.641 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.641 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.641 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.641 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:50.641 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.641 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.641 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.641 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.899 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.157 00:20:51.157 
04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.157 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.157 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.415 04:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.415 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.415 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.415 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.673 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.673 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.673 { 00:20:51.673 "cntlid": 97, 00:20:51.673 "qid": 0, 00:20:51.673 "state": "enabled", 00:20:51.673 "thread": "nvmf_tgt_poll_group_000", 00:20:51.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.673 "listen_address": { 00:20:51.673 "trtype": "TCP", 00:20:51.673 "adrfam": "IPv4", 00:20:51.673 "traddr": "10.0.0.2", 00:20:51.673 "trsvcid": "4420" 00:20:51.673 }, 00:20:51.673 "peer_address": { 00:20:51.673 "trtype": "TCP", 00:20:51.673 "adrfam": "IPv4", 00:20:51.673 "traddr": "10.0.0.1", 00:20:51.673 "trsvcid": "35156" 00:20:51.673 }, 00:20:51.673 "auth": { 00:20:51.673 "state": "completed", 00:20:51.673 "digest": "sha512", 00:20:51.673 "dhgroup": "null" 00:20:51.673 } 00:20:51.673 } 00:20:51.673 ]' 00:20:51.673 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.673 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.673 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.673 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:51.673 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.673 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.673 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.673 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.931 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:20:51.931 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:20:52.865 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.865 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.865 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.865 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.865 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.865 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.865 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.865 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.123 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.380 00:20:53.639 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.639 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.639 04:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.898 { 00:20:53.898 "cntlid": 99, 00:20:53.898 "qid": 0, 00:20:53.898 "state": "enabled", 00:20:53.898 "thread": "nvmf_tgt_poll_group_000", 00:20:53.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.898 "listen_address": { 00:20:53.898 "trtype": "TCP", 00:20:53.898 "adrfam": "IPv4", 00:20:53.898 "traddr": "10.0.0.2", 00:20:53.898 "trsvcid": "4420" 00:20:53.898 }, 00:20:53.898 "peer_address": { 00:20:53.898 "trtype": "TCP", 00:20:53.898 "adrfam": "IPv4", 00:20:53.898 "traddr": "10.0.0.1", 00:20:53.898 "trsvcid": "35192" 00:20:53.898 }, 00:20:53.898 "auth": { 00:20:53.898 "state": "completed", 00:20:53.898 "digest": "sha512", 00:20:53.898 "dhgroup": "null" 00:20:53.898 } 00:20:53.898 } 00:20:53.898 ]' 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.898 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.464 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:20:54.464 04:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:20:55.397 04:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.397 04:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.397 04:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.397 04:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.397 04:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.397 04:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.397 04:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.397 04:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
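Every digest/dhgroup/key combination in this suite repeats the same cycle, which is easier to read with the xtrace noise stripped away. A rough sketch of one iteration, assuming key0..key3 and their ckeyN controller counterparts were registered with the target earlier in the job; hostrpc is the script helper shown above that calls rpc.py -s /var/tmp/host.sock, and rpc_cmd addresses the target application:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # 1. Restrict the host-side bdev layer to the combination under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

  # 2. Allow the host on the subsystem with the key pair for this slot.
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # 3. Attach a controller from the host application, authenticating with the same keys.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # 4. Check the negotiated auth state, then tear down before the next combination.
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect completed
  hostrpc bdev_nvme_detach_controller nvme0
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"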
00:20:55.655 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.913 00:20:55.913 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.913 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.913 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.171 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.171 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.171 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.171 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.171 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.171 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.171 { 00:20:56.171 "cntlid": 101, 00:20:56.171 "qid": 0, 00:20:56.171 "state": "enabled", 00:20:56.171 "thread": "nvmf_tgt_poll_group_000", 00:20:56.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:56.171 "listen_address": { 00:20:56.171 "trtype": "TCP", 00:20:56.171 "adrfam": "IPv4", 00:20:56.171 "traddr": "10.0.0.2", 00:20:56.171 "trsvcid": "4420" 00:20:56.171 }, 00:20:56.171 "peer_address": { 00:20:56.171 "trtype": "TCP", 00:20:56.171 "adrfam": "IPv4", 00:20:56.171 "traddr": "10.0.0.1", 00:20:56.171 "trsvcid": "35232" 00:20:56.171 }, 00:20:56.171 "auth": { 00:20:56.171 "state": "completed", 00:20:56.171 "digest": "sha512", 00:20:56.171 "dhgroup": "null" 00:20:56.171 } 00:20:56.171 } 00:20:56.171 ]' 00:20:56.171 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.430 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.430 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.430 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.430 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.430 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.430 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.430 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.687 04:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:20:56.687 04:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:20:57.621 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.621 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.621 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.621 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.621 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.621 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.621 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.621 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.878 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:57.879 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.879 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.879 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.879 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.879 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.879 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:57.879 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.879 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.879 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.879 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.879 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.879 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.443 00:20:58.443 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.443 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.443 04:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.700 { 00:20:58.700 "cntlid": 103, 00:20:58.700 "qid": 0, 00:20:58.700 "state": "enabled", 00:20:58.700 "thread": "nvmf_tgt_poll_group_000", 00:20:58.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.700 "listen_address": { 00:20:58.700 "trtype": "TCP", 00:20:58.700 "adrfam": "IPv4", 00:20:58.700 "traddr": "10.0.0.2", 00:20:58.700 "trsvcid": "4420" 00:20:58.700 }, 00:20:58.700 "peer_address": { 00:20:58.700 "trtype": "TCP", 00:20:58.700 "adrfam": "IPv4", 00:20:58.700 "traddr": "10.0.0.1", 00:20:58.700 "trsvcid": "43674" 00:20:58.700 }, 00:20:58.700 "auth": { 00:20:58.700 "state": "completed", 00:20:58.700 "digest": "sha512", 00:20:58.700 "dhgroup": "null" 00:20:58.700 } 00:20:58.700 } 00:20:58.700 ]' 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.700 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.957 04:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:20:58.957 04:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.329 04:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.895 00:21:00.895 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.895 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.895 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.153 { 00:21:01.153 "cntlid": 105, 00:21:01.153 "qid": 0, 00:21:01.153 "state": "enabled", 00:21:01.153 "thread": "nvmf_tgt_poll_group_000", 00:21:01.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.153 "listen_address": { 00:21:01.153 "trtype": "TCP", 00:21:01.153 "adrfam": "IPv4", 00:21:01.153 "traddr": "10.0.0.2", 00:21:01.153 "trsvcid": "4420" 00:21:01.153 }, 00:21:01.153 "peer_address": { 00:21:01.153 "trtype": "TCP", 00:21:01.153 "adrfam": "IPv4", 00:21:01.153 "traddr": "10.0.0.1", 00:21:01.153 "trsvcid": "43704" 00:21:01.153 }, 00:21:01.153 "auth": { 00:21:01.153 "state": "completed", 00:21:01.153 "digest": "sha512", 00:21:01.153 "dhgroup": "ffdhe2048" 00:21:01.153 } 00:21:01.153 } 00:21:01.153 ]' 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.153 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.153 04:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.412 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:21:01.412 04:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:21:02.346 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.346 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.346 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.346 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.346 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.346 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.346 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.346 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.604 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:02.604 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.604 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.604 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:02.604 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:02.604 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.862 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.862 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.862 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.862 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.862 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.862 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.862 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.120 00:21:03.120 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.120 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.120 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.378 { 00:21:03.378 "cntlid": 107, 00:21:03.378 "qid": 0, 00:21:03.378 "state": "enabled", 00:21:03.378 "thread": "nvmf_tgt_poll_group_000", 00:21:03.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:03.378 "listen_address": { 00:21:03.378 "trtype": "TCP", 00:21:03.378 "adrfam": "IPv4", 00:21:03.378 "traddr": "10.0.0.2", 00:21:03.378 "trsvcid": "4420" 00:21:03.378 }, 00:21:03.378 "peer_address": { 00:21:03.378 "trtype": "TCP", 00:21:03.378 "adrfam": "IPv4", 00:21:03.378 "traddr": "10.0.0.1", 00:21:03.378 "trsvcid": "43726" 00:21:03.378 }, 00:21:03.378 "auth": { 00:21:03.378 "state": "completed", 00:21:03.378 "digest": "sha512", 00:21:03.378 "dhgroup": "ffdhe2048" 00:21:03.378 } 00:21:03.378 } 00:21:03.378 ]' 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.378 04:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.636 04:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:21:03.636 04:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:21:04.571 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.829 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.829 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.829 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.829 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.829 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.829 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.829 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
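The @74 capture and the @75/@76/@77 comparisons that recur above are the actual pass/fail check for each combination: the target's qpair listing is captured and its auth block must report the expected digest, the expected DH group, and a completed handshake. A minimal form of that check for the sha512/ffdhe2048 case, using only fields visible in the JSON dumps above (how the captured JSON is fed to jq is not visible in the trace; a here-string is assumed here):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # DH-HMAC-CHAP handshake finished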
00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.087 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.345 00:21:05.345 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.345 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.345 04:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.603 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.603 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.603 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.603 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.603 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.603 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.603 { 00:21:05.603 "cntlid": 109, 00:21:05.603 "qid": 0, 00:21:05.603 "state": "enabled", 00:21:05.603 "thread": "nvmf_tgt_poll_group_000", 00:21:05.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:05.603 "listen_address": { 00:21:05.603 "trtype": "TCP", 00:21:05.603 "adrfam": "IPv4", 00:21:05.603 "traddr": "10.0.0.2", 00:21:05.603 "trsvcid": "4420" 00:21:05.603 }, 00:21:05.603 "peer_address": { 00:21:05.603 "trtype": "TCP", 00:21:05.603 "adrfam": "IPv4", 00:21:05.603 "traddr": "10.0.0.1", 00:21:05.603 "trsvcid": "43740" 00:21:05.603 }, 00:21:05.603 "auth": { 00:21:05.603 "state": "completed", 00:21:05.603 "digest": "sha512", 00:21:05.603 "dhgroup": "ffdhe2048" 00:21:05.603 } 00:21:05.603 } 00:21:05.603 ]' 00:21:05.603 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.860 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.860 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.860 04:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.860 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.860 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.860 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.860 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.119 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:21:06.119 04:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:21:07.492 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.492 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.492 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.492 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.492 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.492 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.492 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.492 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.493 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:07.493 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.493 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.493 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:07.493 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:07.493 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.493 04:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:07.493 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.493 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.493 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.493 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:07.493 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.493 04:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.059 00:21:08.059 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.059 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.059 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.059 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.059 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.059 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.059 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.059 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.059 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.059 { 00:21:08.059 "cntlid": 111, 00:21:08.059 "qid": 0, 00:21:08.059 "state": "enabled", 00:21:08.059 "thread": "nvmf_tgt_poll_group_000", 00:21:08.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.059 "listen_address": { 00:21:08.059 "trtype": "TCP", 00:21:08.059 "adrfam": "IPv4", 00:21:08.059 "traddr": "10.0.0.2", 00:21:08.059 "trsvcid": "4420" 00:21:08.059 }, 00:21:08.059 "peer_address": { 00:21:08.059 "trtype": "TCP", 00:21:08.059 "adrfam": "IPv4", 00:21:08.059 "traddr": "10.0.0.1", 00:21:08.059 "trsvcid": "43772" 00:21:08.059 }, 00:21:08.059 "auth": { 00:21:08.059 "state": "completed", 00:21:08.059 "digest": "sha512", 00:21:08.059 "dhgroup": "ffdhe2048" 00:21:08.059 } 00:21:08.059 } 00:21:08.059 ]' 00:21:08.317 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.317 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.317 
04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.317 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.317 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.317 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.317 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.317 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.575 04:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:21:08.575 04:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:21:09.509 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.509 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.509 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.509 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.509 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.509 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.509 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.509 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.510 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.767 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:09.767 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.767 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.767 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:09.767 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:09.767 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.767 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.767 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.767 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.025 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.025 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.025 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.025 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.284 00:21:10.284 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.284 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.284 04:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.543 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.543 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.543 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.543 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.543 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.543 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.543 { 00:21:10.543 "cntlid": 113, 00:21:10.543 "qid": 0, 00:21:10.543 "state": "enabled", 00:21:10.543 "thread": "nvmf_tgt_poll_group_000", 00:21:10.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:10.543 "listen_address": { 00:21:10.543 "trtype": "TCP", 00:21:10.543 "adrfam": "IPv4", 00:21:10.543 "traddr": "10.0.0.2", 00:21:10.543 "trsvcid": "4420" 00:21:10.543 }, 00:21:10.543 "peer_address": { 00:21:10.543 "trtype": "TCP", 00:21:10.543 "adrfam": "IPv4", 00:21:10.543 "traddr": "10.0.0.1", 00:21:10.543 "trsvcid": "52090" 00:21:10.543 }, 00:21:10.543 "auth": { 00:21:10.543 "state": "completed", 00:21:10.543 "digest": "sha512", 00:21:10.543 "dhgroup": "ffdhe3072" 00:21:10.543 } 00:21:10.543 } 00:21:10.543 ]' 00:21:10.543 04:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.543 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.543 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.543 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.543 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.802 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.802 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.802 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.061 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:21:11.061 04:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:21:11.997 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.997 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.997 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.997 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.997 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.997 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.997 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.997 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.256 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.514 00:21:12.772 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.772 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.772 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.030 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.030 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.030 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.031 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.031 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.031 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.031 { 00:21:13.031 "cntlid": 115, 00:21:13.031 "qid": 0, 00:21:13.031 "state": "enabled", 00:21:13.031 "thread": "nvmf_tgt_poll_group_000", 00:21:13.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:13.031 "listen_address": { 00:21:13.031 "trtype": "TCP", 00:21:13.031 "adrfam": "IPv4", 00:21:13.031 "traddr": "10.0.0.2", 00:21:13.031 "trsvcid": "4420" 00:21:13.031 }, 00:21:13.031 "peer_address": { 00:21:13.031 "trtype": "TCP", 00:21:13.031 "adrfam": "IPv4", 
00:21:13.031 "traddr": "10.0.0.1", 00:21:13.031 "trsvcid": "52112" 00:21:13.031 }, 00:21:13.031 "auth": { 00:21:13.031 "state": "completed", 00:21:13.031 "digest": "sha512", 00:21:13.031 "dhgroup": "ffdhe3072" 00:21:13.031 } 00:21:13.031 } 00:21:13.031 ]' 00:21:13.031 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.031 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.031 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.031 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.031 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.031 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.031 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.031 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.289 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:21:13.289 04:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:21:14.224 04:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.224 04:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.224 04:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.224 04:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.224 04:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.224 04:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.224 04:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.224 04:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.482 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.048 00:21:15.048 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.048 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.048 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.306 { 00:21:15.306 "cntlid": 117, 00:21:15.306 "qid": 0, 00:21:15.306 "state": "enabled", 00:21:15.306 "thread": "nvmf_tgt_poll_group_000", 00:21:15.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.306 "listen_address": { 00:21:15.306 "trtype": "TCP", 
00:21:15.306 "adrfam": "IPv4", 00:21:15.306 "traddr": "10.0.0.2", 00:21:15.306 "trsvcid": "4420" 00:21:15.306 }, 00:21:15.306 "peer_address": { 00:21:15.306 "trtype": "TCP", 00:21:15.306 "adrfam": "IPv4", 00:21:15.306 "traddr": "10.0.0.1", 00:21:15.306 "trsvcid": "52146" 00:21:15.306 }, 00:21:15.306 "auth": { 00:21:15.306 "state": "completed", 00:21:15.306 "digest": "sha512", 00:21:15.306 "dhgroup": "ffdhe3072" 00:21:15.306 } 00:21:15.306 } 00:21:15.306 ]' 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.306 04:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.565 04:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:21:15.565 04:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:21:16.500 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.500 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.500 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.500 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.757 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.757 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.757 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:16.757 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.015 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.274 00:21:17.274 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.274 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.274 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.532 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.532 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.532 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.532 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.532 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.532 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.532 { 00:21:17.532 "cntlid": 119, 00:21:17.532 "qid": 0, 00:21:17.532 "state": "enabled", 00:21:17.532 "thread": "nvmf_tgt_poll_group_000", 00:21:17.532 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.532 "listen_address": { 00:21:17.532 "trtype": "TCP", 00:21:17.532 "adrfam": "IPv4", 00:21:17.532 "traddr": "10.0.0.2", 00:21:17.532 "trsvcid": "4420" 00:21:17.532 }, 00:21:17.532 "peer_address": { 00:21:17.532 "trtype": "TCP", 00:21:17.532 "adrfam": "IPv4", 00:21:17.532 "traddr": "10.0.0.1", 00:21:17.532 "trsvcid": "52174" 00:21:17.532 }, 00:21:17.532 "auth": { 00:21:17.532 "state": "completed", 00:21:17.532 "digest": "sha512", 00:21:17.532 "dhgroup": "ffdhe3072" 00:21:17.532 } 00:21:17.532 } 00:21:17.532 ]' 00:21:17.532 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.532 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.532 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.790 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:17.790 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.790 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.790 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.790 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.049 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:21:18.049 04:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:21:18.983 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.983 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.983 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.983 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.983 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.983 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.983 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.983 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.983 04:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.240 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.857 00:21:19.857 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.857 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.857 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.114 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.114 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.114 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.114 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.114 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.114 04:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.114 { 00:21:20.114 "cntlid": 121, 00:21:20.114 "qid": 0, 00:21:20.114 "state": "enabled", 00:21:20.114 "thread": "nvmf_tgt_poll_group_000", 00:21:20.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:20.114 "listen_address": { 00:21:20.114 "trtype": "TCP", 00:21:20.115 "adrfam": "IPv4", 00:21:20.115 "traddr": "10.0.0.2", 00:21:20.115 "trsvcid": "4420" 00:21:20.115 }, 00:21:20.115 "peer_address": { 00:21:20.115 "trtype": "TCP", 00:21:20.115 "adrfam": "IPv4", 00:21:20.115 "traddr": "10.0.0.1", 00:21:20.115 "trsvcid": "36544" 00:21:20.115 }, 00:21:20.115 "auth": { 00:21:20.115 "state": "completed", 00:21:20.115 "digest": "sha512", 00:21:20.115 "dhgroup": "ffdhe4096" 00:21:20.115 } 00:21:20.115 } 00:21:20.115 ]' 00:21:20.115 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.115 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.115 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.115 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:20.115 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.115 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.115 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.115 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.680 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:21:20.680 04:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:21:21.616 04:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.616 04:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.616 04:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.616 04:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.616 04:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
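The backslash-escaped comparisons in the trace (e.g. [[ sha512 == \s\h\a\5\1\2 ]], [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]) are ordinary bash xtrace output, not log corruption: when the right-hand side of == inside [[ ]] expands from a quoted value, set -x prints each character escaped to show it is matched literally rather than as a glob pattern. A minimal sketch of the verification step the trace at target/auth.sh@73-77 performs (variable names are assumptions; the verbatim script body is not in the log):

# Pull the first qpair for the subsystem and check its negotiated auth fields.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
digest=sha512
dhgroup=ffdhe4096

qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest"  ]]   # hash negotiated
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]   # DH group negotiated
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == "completed" ]]  # handshake finished

The adjacent [[ nvme0 == \n\v\m\e\0 ]] check works the same way, confirming that bdev_nvme_get_controllers | jq -r '.[].name' reported the attached controller under the expected name before the qpair query runs.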
00:21:21.616 04:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.616 04:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.616 04:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.873 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.130 00:21:22.130 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.131 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.131 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.695 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.695 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.695 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.695 04:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.695 04:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.695 04:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.695 { 00:21:22.695 "cntlid": 123, 00:21:22.695 "qid": 0, 00:21:22.695 "state": "enabled", 00:21:22.695 "thread": "nvmf_tgt_poll_group_000", 00:21:22.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.695 "listen_address": { 00:21:22.695 "trtype": "TCP", 00:21:22.695 "adrfam": "IPv4", 00:21:22.695 "traddr": "10.0.0.2", 00:21:22.695 "trsvcid": "4420" 00:21:22.695 }, 00:21:22.695 "peer_address": { 00:21:22.695 "trtype": "TCP", 00:21:22.695 "adrfam": "IPv4", 00:21:22.695 "traddr": "10.0.0.1", 00:21:22.695 "trsvcid": "36562" 00:21:22.695 }, 00:21:22.695 "auth": { 00:21:22.695 "state": "completed", 00:21:22.695 "digest": "sha512", 00:21:22.695 "dhgroup": "ffdhe4096" 00:21:22.695 } 00:21:22.695 } 00:21:22.695 ]' 00:21:22.695 04:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.695 04:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.695 04:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.695 04:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:22.695 04:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.695 04:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.695 04:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.695 04:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.953 04:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:21:22.953 04:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:21:23.887 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.887 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.887 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.887 04:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.887 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.887 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.887 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:23.887 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.145 04:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.710 00:21:24.710 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.710 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.710 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.968 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.968 04:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.968 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.968 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.968 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.968 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.968 { 00:21:24.968 "cntlid": 125, 00:21:24.968 "qid": 0, 00:21:24.968 "state": "enabled", 00:21:24.968 "thread": "nvmf_tgt_poll_group_000", 00:21:24.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.968 "listen_address": { 00:21:24.968 "trtype": "TCP", 00:21:24.968 "adrfam": "IPv4", 00:21:24.968 "traddr": "10.0.0.2", 00:21:24.968 "trsvcid": "4420" 00:21:24.968 }, 00:21:24.968 "peer_address": { 00:21:24.968 "trtype": "TCP", 00:21:24.968 "adrfam": "IPv4", 00:21:24.968 "traddr": "10.0.0.1", 00:21:24.968 "trsvcid": "36582" 00:21:24.968 }, 00:21:24.968 "auth": { 00:21:24.968 "state": "completed", 00:21:24.968 "digest": "sha512", 00:21:24.968 "dhgroup": "ffdhe4096" 00:21:24.968 } 00:21:24.968 } 00:21:24.968 ]' 00:21:24.968 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.969 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.969 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.969 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:24.969 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.969 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.969 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.969 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.227 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:21:25.227 04:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:21:26.600 04:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.600 04:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.600 04:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.600 04:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.600 04:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.600 04:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.600 04:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.600 04:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.600 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.166 00:21:27.166 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.166 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.166 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.424 04:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.424 { 00:21:27.424 "cntlid": 127, 00:21:27.424 "qid": 0, 00:21:27.424 "state": "enabled", 00:21:27.424 "thread": "nvmf_tgt_poll_group_000", 00:21:27.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.424 "listen_address": { 00:21:27.424 "trtype": "TCP", 00:21:27.424 "adrfam": "IPv4", 00:21:27.424 "traddr": "10.0.0.2", 00:21:27.424 "trsvcid": "4420" 00:21:27.424 }, 00:21:27.424 "peer_address": { 00:21:27.424 "trtype": "TCP", 00:21:27.424 "adrfam": "IPv4", 00:21:27.424 "traddr": "10.0.0.1", 00:21:27.424 "trsvcid": "36612" 00:21:27.424 }, 00:21:27.424 "auth": { 00:21:27.424 "state": "completed", 00:21:27.424 "digest": "sha512", 00:21:27.424 "dhgroup": "ffdhe4096" 00:21:27.424 } 00:21:27.424 } 00:21:27.424 ]' 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.424 04:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.682 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:21:27.682 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:21:28.616 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.616 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.616 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.616 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.873 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.874 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.874 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.874 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.874 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.131 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:29.131 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.131 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.131 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:29.131 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:29.131 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.131 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.131 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.131 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.131 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.131 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.132 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.132 04:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.696 00:21:29.696 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.696 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.696 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.954 { 00:21:29.954 "cntlid": 129, 00:21:29.954 "qid": 0, 00:21:29.954 "state": "enabled", 00:21:29.954 "thread": "nvmf_tgt_poll_group_000", 00:21:29.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.954 "listen_address": { 00:21:29.954 "trtype": "TCP", 00:21:29.954 "adrfam": "IPv4", 00:21:29.954 "traddr": "10.0.0.2", 00:21:29.954 "trsvcid": "4420" 00:21:29.954 }, 00:21:29.954 "peer_address": { 00:21:29.954 "trtype": "TCP", 00:21:29.954 "adrfam": "IPv4", 00:21:29.954 "traddr": "10.0.0.1", 00:21:29.954 "trsvcid": "53492" 00:21:29.954 }, 00:21:29.954 "auth": { 00:21:29.954 "state": "completed", 00:21:29.954 "digest": "sha512", 00:21:29.954 "dhgroup": "ffdhe6144" 00:21:29.954 } 00:21:29.954 } 00:21:29.954 ]' 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.954 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.213 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:21:30.213 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret 
DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:21:31.148 04:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.148 04:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.148 04:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.148 04:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.148 04:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.148 04:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.148 04:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.148 04:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.714 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.281 00:21:32.281 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.281 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.281 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.540 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.540 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.540 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.540 04:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.540 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.540 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.540 { 00:21:32.540 "cntlid": 131, 00:21:32.540 "qid": 0, 00:21:32.540 "state": "enabled", 00:21:32.540 "thread": "nvmf_tgt_poll_group_000", 00:21:32.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:32.540 "listen_address": { 00:21:32.540 "trtype": "TCP", 00:21:32.540 "adrfam": "IPv4", 00:21:32.540 "traddr": "10.0.0.2", 00:21:32.540 "trsvcid": "4420" 00:21:32.540 }, 00:21:32.540 "peer_address": { 00:21:32.540 "trtype": "TCP", 00:21:32.540 "adrfam": "IPv4", 00:21:32.540 "traddr": "10.0.0.1", 00:21:32.540 "trsvcid": "53516" 00:21:32.540 }, 00:21:32.540 "auth": { 00:21:32.540 "state": "completed", 00:21:32.540 "digest": "sha512", 00:21:32.540 "dhgroup": "ffdhe6144" 00:21:32.540 } 00:21:32.540 } 00:21:32.540 ]' 00:21:32.540 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.540 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.540 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.540 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:32.540 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.540 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.540 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.540 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.106 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:21:33.106 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:21:34.040 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.040 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.040 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.040 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.040 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.040 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.040 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.040 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.298 04:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.862 00:21:34.862 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.862 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.862 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.119 { 00:21:35.119 "cntlid": 133, 00:21:35.119 "qid": 0, 00:21:35.119 "state": "enabled", 00:21:35.119 "thread": "nvmf_tgt_poll_group_000", 00:21:35.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.119 "listen_address": { 00:21:35.119 "trtype": "TCP", 00:21:35.119 "adrfam": "IPv4", 00:21:35.119 "traddr": "10.0.0.2", 00:21:35.119 "trsvcid": "4420" 00:21:35.119 }, 00:21:35.119 "peer_address": { 00:21:35.119 "trtype": "TCP", 00:21:35.119 "adrfam": "IPv4", 00:21:35.119 "traddr": "10.0.0.1", 00:21:35.119 "trsvcid": "53542" 00:21:35.119 }, 00:21:35.119 "auth": { 00:21:35.119 "state": "completed", 00:21:35.119 "digest": "sha512", 00:21:35.119 "dhgroup": "ffdhe6144" 00:21:35.119 } 00:21:35.119 } 00:21:35.119 ]' 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.119 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.682 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret 
DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:21:35.682 04:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:21:36.614 04:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.614 04:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.614 04:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.614 04:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.614 04:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.614 04:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.614 04:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.614 04:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:36.871 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.435 00:21:37.435 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.435 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.435 04:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.693 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.693 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.693 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.693 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.693 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.693 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.693 { 00:21:37.693 "cntlid": 135, 00:21:37.693 "qid": 0, 00:21:37.693 "state": "enabled", 00:21:37.693 "thread": "nvmf_tgt_poll_group_000", 00:21:37.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:37.693 "listen_address": { 00:21:37.693 "trtype": "TCP", 00:21:37.693 "adrfam": "IPv4", 00:21:37.693 "traddr": "10.0.0.2", 00:21:37.693 "trsvcid": "4420" 00:21:37.693 }, 00:21:37.693 "peer_address": { 00:21:37.693 "trtype": "TCP", 00:21:37.693 "adrfam": "IPv4", 00:21:37.693 "traddr": "10.0.0.1", 00:21:37.693 "trsvcid": "53578" 00:21:37.693 }, 00:21:37.693 "auth": { 00:21:37.693 "state": "completed", 00:21:37.693 "digest": "sha512", 00:21:37.693 "dhgroup": "ffdhe6144" 00:21:37.693 } 00:21:37.693 } 00:21:37.693 ]' 00:21:37.693 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.950 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.950 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.950 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:37.950 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.950 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.950 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.950 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.207 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:21:38.208 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:21:39.143 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.143 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.143 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.143 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.143 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.143 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:39.143 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.143 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.143 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.400 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:39.400 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.400 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.400 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:39.400 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:39.400 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.400 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.400 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.400 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.400 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.400 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.401 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.401 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.334 00:21:40.334 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.334 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.334 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.592 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.592 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.592 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.592 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.850 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.850 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.850 { 00:21:40.850 "cntlid": 137, 00:21:40.850 "qid": 0, 00:21:40.850 "state": "enabled", 00:21:40.850 "thread": "nvmf_tgt_poll_group_000", 00:21:40.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.850 "listen_address": { 00:21:40.850 "trtype": "TCP", 00:21:40.850 "adrfam": "IPv4", 00:21:40.850 "traddr": "10.0.0.2", 00:21:40.850 "trsvcid": "4420" 00:21:40.850 }, 00:21:40.850 "peer_address": { 00:21:40.850 "trtype": "TCP", 00:21:40.850 "adrfam": "IPv4", 00:21:40.850 "traddr": "10.0.0.1", 00:21:40.850 "trsvcid": "50684" 00:21:40.850 }, 00:21:40.850 "auth": { 00:21:40.850 "state": "completed", 00:21:40.850 "digest": "sha512", 00:21:40.850 "dhgroup": "ffdhe8192" 00:21:40.850 } 00:21:40.850 } 00:21:40.850 ]' 00:21:40.850 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.850 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.850 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.850 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.850 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.850 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.850 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.850 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.107 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:21:41.108 04:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:21:42.041 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.041 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.041 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.041 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.041 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.041 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.041 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.041 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.299 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:42.299 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.299 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.299 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:42.299 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.299 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.299 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.299 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.299 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.299 04:57:32 
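Note: every keyid iteration traced in this stretch of the log repeats the same three-step exchange between the host-side SPDK app and the target. A minimal sketch of one iteration (the key1/ffdhe8192 round running here), built only from the RPC calls visible in the trace; hostrpc is the suite's wrapper for scripts/rpc.py -s /var/tmp/host.sock, rpc_cmd talks to the target, and $HOSTNQN abbreviates the full nqn.2014-08.org.nvmexpress:uuid:5b23e107-... host NQN spelled out above.
  # 1. Pin the host initiator to the digest/dhgroup pair under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # 2. Register the host on the target subsystem with its key (and controller key, if any).
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 3. Attach a controller from the host side; DH-HMAC-CHAP runs during the connect.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1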
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.299 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.299 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.299 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.234 00:21:43.234 04:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.234 04:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.234 04:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.492 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.492 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.492 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.492 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.492 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.492 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.492 { 00:21:43.492 "cntlid": 139, 00:21:43.492 "qid": 0, 00:21:43.492 "state": "enabled", 00:21:43.492 "thread": "nvmf_tgt_poll_group_000", 00:21:43.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:43.492 "listen_address": { 00:21:43.492 "trtype": "TCP", 00:21:43.492 "adrfam": "IPv4", 00:21:43.492 "traddr": "10.0.0.2", 00:21:43.492 "trsvcid": "4420" 00:21:43.492 }, 00:21:43.492 "peer_address": { 00:21:43.492 "trtype": "TCP", 00:21:43.492 "adrfam": "IPv4", 00:21:43.492 "traddr": "10.0.0.1", 00:21:43.492 "trsvcid": "50706" 00:21:43.492 }, 00:21:43.492 "auth": { 00:21:43.492 "state": "completed", 00:21:43.492 "digest": "sha512", 00:21:43.492 "dhgroup": "ffdhe8192" 00:21:43.492 } 00:21:43.492 } 00:21:43.492 ]' 00:21:43.492 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.749 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.749 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.749 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.749 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.749 04:57:34 
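Note: after each attach, auth.sh@73-77 confirms that the admin qpair really completed DH-HMAC-CHAP with the expected parameters, which is what the jq checks around here are doing. A condensed restatement of those checks (same RPCs and jq filters as in the trace; the herestring plumbing is an assumption of this sketch):
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]   # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished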
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.749 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.749 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.007 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:21:44.007 04:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: --dhchap-ctrl-secret DHHC-1:02:ZjMxMDkzMjI1ZDMxZmYyZWY0MzAxZjUwZTQ4ZTIzZDNiNWIzNGIwMzE4ZWU4NDk5TdBPqw==: 00:21:44.940 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.940 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.940 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.940 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.940 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.940 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.940 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:44.940 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.506 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:45.506 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.506 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.506 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:45.506 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:45.506 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.506 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.506 04:57:35 
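Note: in between the bdev-based connects, auth.sh@80-83 exercises the kernel initiator as well, handing nvme-cli the same secrets directly and then tearing the session down. A sketch of that leg with the DHHC-1:... strings replaced by placeholders ($KEY, $CKEY); --dhchap-ctrl-secret is simply dropped for keys that have no controller secret.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
      --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # expect: disconnected 1 controller(s)
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"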
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.506 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.506 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.506 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.506 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.506 04:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.440 00:21:46.440 04:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.440 04:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.440 04:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.440 04:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.440 04:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.440 04:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.440 04:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.440 04:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.440 04:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.440 { 00:21:46.440 "cntlid": 141, 00:21:46.440 "qid": 0, 00:21:46.440 "state": "enabled", 00:21:46.440 "thread": "nvmf_tgt_poll_group_000", 00:21:46.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.440 "listen_address": { 00:21:46.440 "trtype": "TCP", 00:21:46.440 "adrfam": "IPv4", 00:21:46.440 "traddr": "10.0.0.2", 00:21:46.440 "trsvcid": "4420" 00:21:46.440 }, 00:21:46.440 "peer_address": { 00:21:46.440 "trtype": "TCP", 00:21:46.440 "adrfam": "IPv4", 00:21:46.440 "traddr": "10.0.0.1", 00:21:46.440 "trsvcid": "50734" 00:21:46.440 }, 00:21:46.440 "auth": { 00:21:46.440 "state": "completed", 00:21:46.440 "digest": "sha512", 00:21:46.440 "dhgroup": "ffdhe8192" 00:21:46.440 } 00:21:46.440 } 00:21:46.440 ]' 00:21:46.440 04:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.440 04:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.440 04:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.698 04:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.698 04:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.698 04:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.698 04:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.698 04:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.957 04:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:21:46.957 04:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:01:ZWUwNTUzY2MwOWI0NzA4MDcwYmQ0MzMyNTFjZDY5MDGswfE9: 00:21:47.891 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.891 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.891 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.891 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.891 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.891 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.891 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:47.891 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.149 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:48.149 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.149 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.149 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:48.149 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:48.149 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.149 04:57:38 
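Note: not every key has a matching controller key. connect_authenticate builds the optional argument with the expansion shown at auth.sh@68, so the key3 rounds (like the one starting here) add the host and attach with --dhchap-key only, i.e. one-way authentication of the host, while the other keys also verify the controller. A sketch of that conditional, with $3 being the keyid argument (quoting per the trace, surrounding function body abridged):
  key="key$3"                                        # auth.sh@67
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # auth.sh@68: empty when ckeys[$3] is unset
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key "$key" "${ckey[@]}"               # auth.sh@70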
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:48.149 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.149 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.149 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.149 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:48.149 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.149 04:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.084 00:21:49.084 04:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.084 04:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.084 04:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.695 04:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.695 04:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.695 04:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.695 04:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.695 04:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.695 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.695 { 00:21:49.695 "cntlid": 143, 00:21:49.695 "qid": 0, 00:21:49.695 "state": "enabled", 00:21:49.695 "thread": "nvmf_tgt_poll_group_000", 00:21:49.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.695 "listen_address": { 00:21:49.695 "trtype": "TCP", 00:21:49.695 "adrfam": "IPv4", 00:21:49.695 "traddr": "10.0.0.2", 00:21:49.695 "trsvcid": "4420" 00:21:49.695 }, 00:21:49.695 "peer_address": { 00:21:49.695 "trtype": "TCP", 00:21:49.695 "adrfam": "IPv4", 00:21:49.695 "traddr": "10.0.0.1", 00:21:49.695 "trsvcid": "43238" 00:21:49.695 }, 00:21:49.695 "auth": { 00:21:49.695 "state": "completed", 00:21:49.695 "digest": "sha512", 00:21:49.695 "dhgroup": "ffdhe8192" 00:21:49.695 } 00:21:49.695 } 00:21:49.695 ]' 00:21:49.695 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.695 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.695 
04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.695 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.695 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.695 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.695 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.695 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.016 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:21:50.017 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:21:50.963 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.963 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.963 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.963 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.963 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.963 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:50.963 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:50.963 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:50.963 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:50.963 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:50.963 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:51.221 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:51.221 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.221 04:57:41 
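Note: for the final round (auth.sh@129-141) the host no longer pins a single algorithm pair; it offers every supported digest and DH group and lets the negotiation settle the choice. The host-side call, exactly as issued in the trace above:
  hostrpc bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  # the qpair dump that follows still reports sha512 / ffdhe8192 as the negotiated pair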
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.221 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:51.221 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:51.221 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.221 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.221 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.221 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.221 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.221 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.221 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.221 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.155 00:21:52.155 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.155 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.155 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.413 { 00:21:52.413 "cntlid": 145, 00:21:52.413 "qid": 0, 00:21:52.413 "state": "enabled", 00:21:52.413 "thread": "nvmf_tgt_poll_group_000", 00:21:52.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.413 "listen_address": { 00:21:52.413 "trtype": "TCP", 00:21:52.413 "adrfam": "IPv4", 00:21:52.413 "traddr": "10.0.0.2", 00:21:52.413 "trsvcid": "4420" 00:21:52.413 }, 00:21:52.413 "peer_address": { 00:21:52.413 
"trtype": "TCP", 00:21:52.413 "adrfam": "IPv4", 00:21:52.413 "traddr": "10.0.0.1", 00:21:52.413 "trsvcid": "43258" 00:21:52.413 }, 00:21:52.413 "auth": { 00:21:52.413 "state": "completed", 00:21:52.413 "digest": "sha512", 00:21:52.413 "dhgroup": "ffdhe8192" 00:21:52.413 } 00:21:52.413 } 00:21:52.413 ]' 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.413 04:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.671 04:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:21:52.671 04:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZWM0MTM4ZWVmMjY2MTMyYzgwMzY4NWQxNDJmMjVmYjcwODQyZjM3NWJjNjAxNGUyMQby6g==: --dhchap-ctrl-secret DHHC-1:03:MzFhNzRlNDZlZDcxOWM5YjdlNjRjZDRmNDIwNjJlN2Q5NGU0Mjc3MTRmZWZmZmZkMWJmOThmYmNiN2VhOWNkY27RsBs=: 00:21:54.043 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.043 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.043 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.043 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.043 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.043 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:54.043 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.043 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.043 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.043 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:54.043 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:54.044 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:54.044 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:54.044 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.044 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:54.044 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.044 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:54.044 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:54.044 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:54.609 request: 00:21:54.609 { 00:21:54.609 "name": "nvme0", 00:21:54.609 "trtype": "tcp", 00:21:54.609 "traddr": "10.0.0.2", 00:21:54.609 "adrfam": "ipv4", 00:21:54.609 "trsvcid": "4420", 00:21:54.609 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:54.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:54.609 "prchk_reftag": false, 00:21:54.609 "prchk_guard": false, 00:21:54.609 "hdgst": false, 00:21:54.609 "ddgst": false, 00:21:54.609 "dhchap_key": "key2", 00:21:54.609 "allow_unrecognized_csi": false, 00:21:54.609 "method": "bdev_nvme_attach_controller", 00:21:54.609 "req_id": 1 00:21:54.609 } 00:21:54.609 Got JSON-RPC error response 00:21:54.609 response: 00:21:54.609 { 00:21:54.609 "code": -5, 00:21:54.609 "message": "Input/output error" 00:21:54.609 } 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.609 04:57:45 
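Note: the NOT cases here are deliberate failures. At auth.sh@144-145 the host entry on the subsystem only carries key1, so attaching with key2 has to be refused; the attach surfaces the JSON-RPC error shown above (code -5, Input/output error) and the suite's NOT helper asserts the non-zero exit. The next two NOT cases (@150 and @155) repeat the pattern with a mismatched and an unregistered controller key. A condensed sketch, using bdev_connect as the trace does (it wraps the hostrpc bdev_nvme_attach_controller call shown above):
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key1  # auth.sh@144
  NOT bdev_connect -b nvme0 --dhchap-key key2      # auth.sh@145: must fail, host is not allowed key2
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"                 # auth.sh@146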
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:54.609 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.610 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:54.610 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:54.610 04:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:55.542 request: 00:21:55.542 { 00:21:55.542 "name": "nvme0", 00:21:55.542 "trtype": "tcp", 00:21:55.542 "traddr": "10.0.0.2", 00:21:55.542 "adrfam": "ipv4", 00:21:55.542 "trsvcid": "4420", 00:21:55.542 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:55.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.542 "prchk_reftag": false, 00:21:55.542 "prchk_guard": false, 00:21:55.542 "hdgst": false, 00:21:55.542 "ddgst": false, 00:21:55.542 "dhchap_key": "key1", 00:21:55.542 "dhchap_ctrlr_key": "ckey2", 00:21:55.542 "allow_unrecognized_csi": false, 00:21:55.542 "method": "bdev_nvme_attach_controller", 00:21:55.542 "req_id": 1 00:21:55.542 } 00:21:55.542 Got JSON-RPC error response 00:21:55.542 response: 00:21:55.542 { 00:21:55.542 "code": -5, 00:21:55.542 "message": "Input/output error" 00:21:55.542 } 00:21:55.542 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:55.542 04:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.542 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.542 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:55.542 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.542 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.542 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.542 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.542 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.543 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.476 request: 00:21:56.476 { 00:21:56.476 "name": "nvme0", 00:21:56.476 "trtype": "tcp", 00:21:56.476 "traddr": "10.0.0.2", 00:21:56.476 "adrfam": "ipv4", 00:21:56.476 "trsvcid": "4420", 00:21:56.476 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:56.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:56.476 "prchk_reftag": false, 00:21:56.476 "prchk_guard": false, 00:21:56.476 "hdgst": false, 00:21:56.476 "ddgst": false, 00:21:56.476 "dhchap_key": "key1", 00:21:56.476 "dhchap_ctrlr_key": "ckey1", 00:21:56.476 "allow_unrecognized_csi": false, 00:21:56.476 "method": "bdev_nvme_attach_controller", 00:21:56.476 "req_id": 1 00:21:56.476 } 00:21:56.476 Got JSON-RPC error response 00:21:56.476 response: 00:21:56.476 { 00:21:56.476 "code": -5, 00:21:56.476 "message": "Input/output error" 00:21:56.476 } 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2318824 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2318824 ']' 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2318824 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2318824 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2318824' 00:21:56.476 killing process with pid 2318824 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2318824 00:21:56.476 04:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2318824 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=2341668 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 2341668 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2341668 ']' 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.735 04:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.679 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.679 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:57.679 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:57.679 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:57.679 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.937 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.937 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:57.937 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2341668 00:21:57.937 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2341668 ']' 00:21:57.937 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.937 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:57.937 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:57.937 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:57.937 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.195 null0 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Idp 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.1lC ]] 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1lC 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pQB 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.StY ]] 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.StY 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:58.195 04:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.cMf 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.iR5 ]] 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iR5 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.cMc 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.195 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.196 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.196 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:58.196 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:58.196 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.196 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.196 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:58.196 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:58.196 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.196 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:58.196 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.196 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.453 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.453 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:58.453 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:21:58.453 04:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.826 nvme0n1 00:21:59.826 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.826 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.826 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.084 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.084 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.084 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.084 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.084 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.084 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.084 { 00:22:00.084 "cntlid": 1, 00:22:00.084 "qid": 0, 00:22:00.084 "state": "enabled", 00:22:00.084 "thread": "nvmf_tgt_poll_group_000", 00:22:00.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.084 "listen_address": { 00:22:00.084 "trtype": "TCP", 00:22:00.084 "adrfam": "IPv4", 00:22:00.084 "traddr": "10.0.0.2", 00:22:00.084 "trsvcid": "4420" 00:22:00.084 }, 00:22:00.084 "peer_address": { 00:22:00.084 "trtype": "TCP", 00:22:00.084 "adrfam": "IPv4", 00:22:00.084 "traddr": "10.0.0.1", 00:22:00.084 "trsvcid": "49170" 00:22:00.084 }, 00:22:00.084 "auth": { 00:22:00.084 "state": "completed", 00:22:00.084 "digest": "sha512", 00:22:00.084 "dhgroup": "ffdhe8192" 00:22:00.084 } 00:22:00.084 } 00:22:00.084 ]' 00:22:00.084 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.084 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.084 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.342 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.342 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.342 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.342 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.342 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.601 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:22:00.601 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:22:01.534 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.534 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.534 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.534 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.534 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.534 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:01.534 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.534 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.534 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.534 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:01.534 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:01.791 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:01.791 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:01.791 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:01.791 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:01.791 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.791 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:01.791 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.791 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:01.791 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.791 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.047 request: 00:22:02.047 { 00:22:02.047 "name": "nvme0", 00:22:02.047 "trtype": "tcp", 00:22:02.047 "traddr": "10.0.0.2", 00:22:02.047 "adrfam": "ipv4", 00:22:02.047 "trsvcid": "4420", 00:22:02.047 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.047 "prchk_reftag": false, 00:22:02.047 "prchk_guard": false, 00:22:02.047 "hdgst": false, 00:22:02.047 "ddgst": false, 00:22:02.047 "dhchap_key": "key3", 00:22:02.047 "allow_unrecognized_csi": false, 00:22:02.047 "method": "bdev_nvme_attach_controller", 00:22:02.047 "req_id": 1 00:22:02.047 } 00:22:02.047 Got JSON-RPC error response 00:22:02.047 response: 00:22:02.047 { 00:22:02.047 "code": -5, 00:22:02.047 "message": "Input/output error" 00:22:02.047 } 00:22:02.047 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:02.047 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:02.047 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:02.047 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:02.047 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:02.047 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:02.047 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:02.047 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:02.305 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:02.305 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:02.305 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:02.305 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:02.305 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.305 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:02.305 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.305 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.305 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.305 04:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.562 request: 00:22:02.562 { 00:22:02.562 "name": "nvme0", 00:22:02.562 "trtype": "tcp", 00:22:02.562 "traddr": "10.0.0.2", 00:22:02.562 "adrfam": "ipv4", 00:22:02.562 "trsvcid": "4420", 00:22:02.562 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.562 "prchk_reftag": false, 00:22:02.562 "prchk_guard": false, 00:22:02.562 "hdgst": false, 00:22:02.562 "ddgst": false, 00:22:02.562 "dhchap_key": "key3", 00:22:02.562 "allow_unrecognized_csi": false, 00:22:02.562 "method": "bdev_nvme_attach_controller", 00:22:02.562 "req_id": 1 00:22:02.562 } 00:22:02.562 Got JSON-RPC error response 00:22:02.562 response: 00:22:02.562 { 00:22:02.562 "code": -5, 00:22:02.562 "message": "Input/output error" 00:22:02.562 } 00:22:02.562 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:02.562 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:02.562 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:02.562 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:02.562 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:02.562 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:02.562 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:02.562 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.562 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.562 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.819 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.819 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.819 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.819 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.819 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.819 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.819 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.077 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.077 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:03.077 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:03.077 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:03.077 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:03.077 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.077 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:03.077 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.077 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:03.077 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:03.077 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:03.642 request: 00:22:03.642 { 00:22:03.642 "name": "nvme0", 00:22:03.642 "trtype": "tcp", 00:22:03.642 "traddr": "10.0.0.2", 00:22:03.642 "adrfam": "ipv4", 00:22:03.642 "trsvcid": "4420", 00:22:03.642 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:03.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.643 "prchk_reftag": false, 00:22:03.643 "prchk_guard": false, 00:22:03.643 "hdgst": false, 00:22:03.643 "ddgst": false, 00:22:03.643 "dhchap_key": "key0", 00:22:03.643 "dhchap_ctrlr_key": "key1", 00:22:03.643 "allow_unrecognized_csi": false, 00:22:03.643 "method": "bdev_nvme_attach_controller", 00:22:03.643 "req_id": 1 00:22:03.643 } 00:22:03.643 Got JSON-RPC error response 00:22:03.643 response: 00:22:03.643 { 00:22:03.643 "code": -5, 00:22:03.643 "message": "Input/output error" 00:22:03.643 } 00:22:03.643 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:03.643 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:03.643 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:03.643 04:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:03.643 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:03.643 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:03.643 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:03.900 nvme0n1 00:22:03.900 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:03.900 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.900 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:04.158 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.158 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.158 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.416 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:04.416 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.416 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.416 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.416 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:04.416 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:04.416 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:06.317 nvme0n1 00:22:06.317 04:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:06.317 04:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:06.317 04:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.317 04:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.317 04:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:06.317 04:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.317 04:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.317 04:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.317 04:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:06.317 04:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:06.317 04:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.575 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.575 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:22:06.575 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: --dhchap-ctrl-secret DHHC-1:03:ZjEwMDc5ZWYwYzBiODMyMTQwZmU5ZGI2OWZhZWQ3NjlkOWVmZWUxYjc2MDViMDE2MGQyOWJkNDNlMjM3NmM3M0L66DE=: 00:22:07.508 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:07.508 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:07.508 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:07.508 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:07.508 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:07.508 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:07.508 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:07.508 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.508 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.766 04:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:07.766 04:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:07.766 04:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:07.766 04:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:07.766 04:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.766 04:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:07.766 04:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.766 04:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:07.766 04:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:07.766 04:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:08.699 request: 00:22:08.699 { 00:22:08.699 "name": "nvme0", 00:22:08.699 "trtype": "tcp", 00:22:08.699 "traddr": "10.0.0.2", 00:22:08.699 "adrfam": "ipv4", 00:22:08.699 "trsvcid": "4420", 00:22:08.699 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:08.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.699 "prchk_reftag": false, 00:22:08.699 "prchk_guard": false, 00:22:08.699 "hdgst": false, 00:22:08.699 "ddgst": false, 00:22:08.699 "dhchap_key": "key1", 00:22:08.699 "allow_unrecognized_csi": false, 00:22:08.699 "method": "bdev_nvme_attach_controller", 00:22:08.700 "req_id": 1 00:22:08.700 } 00:22:08.700 Got JSON-RPC error response 00:22:08.700 response: 00:22:08.700 { 00:22:08.700 "code": -5, 00:22:08.700 "message": "Input/output error" 00:22:08.700 } 00:22:08.700 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:08.700 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:08.700 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:08.700 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:08.700 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:08.700 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:08.700 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:10.080 nvme0n1 00:22:10.080 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:10.080 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.080 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:10.647 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.647 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.647 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.904 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.904 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.904 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.904 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.904 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:10.904 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:10.904 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:11.161 nvme0n1 00:22:11.161 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:11.161 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.161 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:11.418 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.418 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.418 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: '' 2s 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: ]] 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTNkZDFhZTIxNjY4MmMyNDAwMjNhMzk4YTIxYmM0MjHQZhHu: 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:11.676 04:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: 2s 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: ]] 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NzU1MmI3NGRmNzk4NDEyYWJlYTcyNGU3YjZmZmVjMzQ3MGE4MGMzZWUzM2M2NzI109Um7w==: 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:14.203 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:16.102 04:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:17.475 nvme0n1 00:22:17.475 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:17.475 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.475 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.475 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.475 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:17.475 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:18.409 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:18.409 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:18.409 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.409 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.409 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.409 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.409 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.409 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.409 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:18.409 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:18.974 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:18.974 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:18.974 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.230 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:19.231 04:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:20.165 request: 00:22:20.165 { 00:22:20.165 "name": "nvme0", 00:22:20.165 "dhchap_key": "key1", 00:22:20.165 "dhchap_ctrlr_key": "key3", 00:22:20.165 "method": "bdev_nvme_set_keys", 00:22:20.165 "req_id": 1 00:22:20.165 } 00:22:20.165 Got JSON-RPC error response 00:22:20.165 response: 00:22:20.165 { 00:22:20.165 "code": -13, 00:22:20.165 "message": "Permission denied" 00:22:20.165 } 00:22:20.166 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:20.166 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:20.166 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:20.166 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:20.166 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:20.166 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.166 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:20.423 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:20.423 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:21.355 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:21.355 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:21.355 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.613 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:21.613 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:21.613 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.613 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.613 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.613 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:21.613 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:21.613 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.539 nvme0n1 00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:23.539 04:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:24.133 request: 00:22:24.133 { 00:22:24.133 "name": "nvme0", 00:22:24.133 "dhchap_key": "key2", 00:22:24.133 "dhchap_ctrlr_key": "key0", 00:22:24.133 "method": "bdev_nvme_set_keys", 00:22:24.133 "req_id": 1 00:22:24.133 } 00:22:24.133 Got JSON-RPC error response 00:22:24.133 response: 00:22:24.133 { 00:22:24.133 "code": -13, 00:22:24.133 "message": "Permission denied" 00:22:24.133 } 00:22:24.133 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:24.133 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:24.133 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.133 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:24.133 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:24.133 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:24.133 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.390 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:24.390 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:25.324 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:25.324 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:25.324 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2318850 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2318850 ']' 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2318850 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:25.891 
04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2318850 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2318850' 00:22:25.891 killing process with pid 2318850 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2318850 00:22:25.891 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2318850 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:26.151 rmmod nvme_tcp 00:22:26.151 rmmod nvme_fabrics 00:22:26.151 rmmod nvme_keyring 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 2341668 ']' 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 2341668 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2341668 ']' 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2341668 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2341668 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2341668' 00:22:26.151 killing process with pid 2341668 00:22:26.151 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2341668 00:22:26.151 04:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2341668 00:22:26.410 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:26.410 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:26.410 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:26.410 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:26.410 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:22:26.410 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:26.410 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:22:26.410 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:26.410 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:26.410 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.410 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.410 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.946 04:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:28.946 04:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Idp /tmp/spdk.key-sha256.pQB /tmp/spdk.key-sha384.cMf /tmp/spdk.key-sha512.cMc /tmp/spdk.key-sha512.1lC /tmp/spdk.key-sha384.StY /tmp/spdk.key-sha256.iR5 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:28.946 00:22:28.946 real 3m41.891s 00:22:28.946 user 8m38.946s 00:22:28.946 sys 0m27.749s 00:22:28.946 04:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:28.946 04:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.946 ************************************ 00:22:28.946 END TEST nvmf_auth_target 00:22:28.946 ************************************ 00:22:28.946 04:58:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:28.946 04:58:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:28.946 04:58:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:28.947 ************************************ 00:22:28.947 START TEST nvmf_bdevio_no_huge 00:22:28.947 ************************************ 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:28.947 * Looking for test storage... 
00:22:28.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1689 -- # lcov --version 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:28.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.947 --rc genhtml_branch_coverage=1 00:22:28.947 --rc genhtml_function_coverage=1 00:22:28.947 --rc genhtml_legend=1 00:22:28.947 --rc geninfo_all_blocks=1 00:22:28.947 --rc geninfo_unexecuted_blocks=1 00:22:28.947 00:22:28.947 ' 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:28.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.947 --rc genhtml_branch_coverage=1 00:22:28.947 --rc genhtml_function_coverage=1 00:22:28.947 --rc genhtml_legend=1 00:22:28.947 --rc geninfo_all_blocks=1 00:22:28.947 --rc geninfo_unexecuted_blocks=1 00:22:28.947 00:22:28.947 ' 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:28.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.947 --rc genhtml_branch_coverage=1 00:22:28.947 --rc genhtml_function_coverage=1 00:22:28.947 --rc genhtml_legend=1 00:22:28.947 --rc geninfo_all_blocks=1 00:22:28.947 --rc geninfo_unexecuted_blocks=1 00:22:28.947 00:22:28.947 ' 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:28.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.947 --rc genhtml_branch_coverage=1 00:22:28.947 --rc genhtml_function_coverage=1 00:22:28.947 --rc genhtml_legend=1 00:22:28.947 --rc geninfo_all_blocks=1 00:22:28.947 --rc geninfo_unexecuted_blocks=1 00:22:28.947 00:22:28.947 ' 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:28.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:28.947 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:28.948 04:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:30.853 
04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:30.853 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:30.853 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:30.853 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:30.854 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:30.854 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:30.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:22:30.854 00:22:30.854 --- 10.0.0.2 ping statistics --- 00:22:30.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.854 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:22:30.854 00:22:30.854 --- 10.0.0.1 ping statistics --- 00:22:30.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.854 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=2346981 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 2346981 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2346981 ']' 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.854 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.854 [2024-10-28 04:58:21.437199] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:22:30.854 [2024-10-28 04:58:21.437282] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:31.113 [2024-10-28 04:58:21.591355] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:31.113 [2024-10-28 04:58:21.623986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.113 [2024-10-28 04:58:21.677306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.113 [2024-10-28 04:58:21.677387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.113 [2024-10-28 04:58:21.677404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.113 [2024-10-28 04:58:21.677418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.113 [2024-10-28 04:58:21.677430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
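For the no-huge variant, nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace prepared just above, with hugepages disabled and a 1024 MB plain-memory pool, then waits for the RPC socket before the test continues. A condensed sketch of that startup, reusing the exact command recorded in this log; the polling loop is only a stand-in for the waitforlisten helper, whose body is not shown here:

# -m 0x78 pins reactors to cores 3-6 (matching the "Reactor started on core"
# notices that follow); --no-huge -s 1024 runs the target from plain memory.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Stand-in for waitforlisten: poll the default RPC socket until it answers.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done
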
00:22:31.113 [2024-10-28 04:58:21.678592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:31.113 [2024-10-28 04:58:21.678658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:31.113 [2024-10-28 04:58:21.678716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:31.113 [2024-10-28 04:58:21.678720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.047 [2024-10-28 04:58:22.508017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.047 Malloc0 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.047 [2024-10-28 04:58:22.545817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:32.047 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:32.047 { 00:22:32.047 "params": { 00:22:32.047 "name": "Nvme$subsystem", 00:22:32.047 "trtype": "$TEST_TRANSPORT", 00:22:32.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.047 "adrfam": "ipv4", 00:22:32.047 "trsvcid": "$NVMF_PORT", 00:22:32.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.048 "hdgst": ${hdgst:-false}, 00:22:32.048 "ddgst": ${ddgst:-false} 00:22:32.048 }, 00:22:32.048 "method": "bdev_nvme_attach_controller" 00:22:32.048 } 00:22:32.048 EOF 00:22:32.048 )") 00:22:32.048 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:22:32.048 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:22:32.048 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:22:32.048 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:32.048 "params": { 00:22:32.048 "name": "Nvme1", 00:22:32.048 "trtype": "tcp", 00:22:32.048 "traddr": "10.0.0.2", 00:22:32.048 "adrfam": "ipv4", 00:22:32.048 "trsvcid": "4420", 00:22:32.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.048 "hdgst": false, 00:22:32.048 "ddgst": false 00:22:32.048 }, 00:22:32.048 "method": "bdev_nvme_attach_controller" 00:22:32.048 }' 00:22:32.048 [2024-10-28 04:58:22.598709] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:22:32.048 [2024-10-28 04:58:22.598796] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2347161 ] 00:22:32.306 [2024-10-28 04:58:22.743710] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
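The bdevio run above is driven by a one-shot JSON config handed to it on /dev/fd/62 rather than by an RPC socket: gen_nvmf_target_json emits the bdev_nvme_attach_controller entry printed just before this point, and bdevio is started with the same --no-huge -s 1024 memory options as the target. A hand-assembled approximation follows; the inner params object is copied from the log, while the subsystems/bdev/config envelope and the heredoc standing in for the /dev/fd/62 process substitution are assumptions:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --json /dev/stdin --no-huge -s 1024 <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
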
00:22:32.306 [2024-10-28 04:58:22.772880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:32.306 [2024-10-28 04:58:22.823304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.306 [2024-10-28 04:58:22.823355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.306 [2024-10-28 04:58:22.823358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.563 I/O targets: 00:22:32.563 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:32.563 00:22:32.563 00:22:32.563 CUnit - A unit testing framework for C - Version 2.1-3 00:22:32.563 http://cunit.sourceforge.net/ 00:22:32.563 00:22:32.563 00:22:32.563 Suite: bdevio tests on: Nvme1n1 00:22:32.821 Test: blockdev write read block ...passed 00:22:32.821 Test: blockdev write zeroes read block ...passed 00:22:32.821 Test: blockdev write zeroes read no split ...passed 00:22:32.821 Test: blockdev write zeroes read split ...passed 00:22:32.821 Test: blockdev write zeroes read split partial ...passed 00:22:32.821 Test: blockdev reset ...[2024-10-28 04:58:23.261412] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:32.821 [2024-10-28 04:58:23.261539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148a6a0 (9): Bad file descriptor 00:22:32.821 [2024-10-28 04:58:23.319034] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:32.821 passed 00:22:32.821 Test: blockdev write read 8 blocks ...passed 00:22:32.821 Test: blockdev write read size > 128k ...passed 00:22:32.821 Test: blockdev write read invalid size ...passed 00:22:32.821 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:32.821 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:32.821 Test: blockdev write read max offset ...passed 00:22:33.079 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:33.079 Test: blockdev writev readv 8 blocks ...passed 00:22:33.079 Test: blockdev writev readv 30 x 1block ...passed 00:22:33.079 Test: blockdev writev readv block ...passed 00:22:33.079 Test: blockdev writev readv size > 128k ...passed 00:22:33.079 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:33.079 Test: blockdev comparev and writev ...[2024-10-28 04:58:23.534107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.079 [2024-10-28 04:58:23.534159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.079 [2024-10-28 04:58:23.534186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.079 [2024-10-28 04:58:23.534204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:33.079 [2024-10-28 04:58:23.534573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.079 [2024-10-28 04:58:23.534598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:33.079 [2024-10-28 04:58:23.534620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.079 [2024-10-28 04:58:23.534647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:33.079 [2024-10-28 04:58:23.535013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.079 [2024-10-28 04:58:23.535044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:33.079 [2024-10-28 04:58:23.535068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.079 [2024-10-28 04:58:23.535085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:33.079 [2024-10-28 04:58:23.535447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.079 [2024-10-28 04:58:23.535473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:33.079 [2024-10-28 04:58:23.535495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.079 [2024-10-28 04:58:23.535511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:33.079 passed 00:22:33.079 Test: blockdev nvme passthru rw ...passed 00:22:33.079 Test: blockdev nvme passthru vendor specific ...[2024-10-28 04:58:23.618941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.079 [2024-10-28 04:58:23.618970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:33.079 [2024-10-28 04:58:23.619135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.079 [2024-10-28 04:58:23.619158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:33.079 [2024-10-28 04:58:23.619319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.079 [2024-10-28 04:58:23.619342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:33.079 [2024-10-28 04:58:23.619506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.079 [2024-10-28 04:58:23.619530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:33.079 passed 00:22:33.079 Test: blockdev nvme admin passthru ...passed 00:22:33.079 Test: blockdev copy ...passed 00:22:33.079 00:22:33.079 Run Summary: Type Total Ran Passed Failed Inactive 00:22:33.079 suites 1 1 n/a 0 0 00:22:33.079 tests 23 23 23 0 0 00:22:33.079 asserts 152 152 152 0 n/a 00:22:33.079 00:22:33.079 Elapsed time = 1.070 seconds 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.675 04:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.675 rmmod nvme_tcp 00:22:33.675 rmmod nvme_fabrics 00:22:33.675 rmmod nvme_keyring 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 2346981 ']' 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 2346981 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2346981 ']' 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2346981 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2346981 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2346981' 00:22:33.675 killing process with pid 2346981 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2346981 00:22:33.675 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2346981 00:22:33.934 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:33.934 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:33.934 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 
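nvmftestfini unwinds the fixture in reverse order: stop the nvmf_tgt process, unload the NVMe-oF kernel modules (the rmmod lines above), strip only the SPDK-tagged iptables rules, and tear the namespace back down. A rough equivalent of the cleanup captured here; killprocess and remove_spdk_ns are test helpers whose bodies are not shown in this log, so the kill/wait and ip netns lines are assumptions about what they do:

# Stop the target started by nvmfappstart.
kill "$nvmfpid"
wait "$nvmfpid" 2>/dev/null || true

# Unload the transport modules loaded during nvmftestinit.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# iptr: reload the ruleset minus anything commented SPDK_NVMF.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Assumed body of remove_spdk_ns for this fixture, followed by the address
# flush that appears verbatim in the log.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1
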
00:22:33.934 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:33.934 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:22:33.934 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:33.934 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:22:33.934 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:33.934 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:33.934 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.934 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.934 04:58:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:36.469 00:22:36.469 real 0m7.492s 00:22:36.469 user 0m14.239s 00:22:36.469 sys 0m2.701s 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.469 ************************************ 00:22:36.469 END TEST nvmf_bdevio_no_huge 00:22:36.469 ************************************ 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:36.469 ************************************ 00:22:36.469 START TEST nvmf_tls 00:22:36.469 ************************************ 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:36.469 * Looking for test storage... 
00:22:36.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1689 -- # lcov --version 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:36.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.469 --rc genhtml_branch_coverage=1 00:22:36.469 --rc genhtml_function_coverage=1 00:22:36.469 --rc genhtml_legend=1 00:22:36.469 --rc geninfo_all_blocks=1 00:22:36.469 --rc geninfo_unexecuted_blocks=1 00:22:36.469 00:22:36.469 ' 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:36.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.469 --rc genhtml_branch_coverage=1 00:22:36.469 --rc genhtml_function_coverage=1 00:22:36.469 --rc genhtml_legend=1 00:22:36.469 --rc geninfo_all_blocks=1 00:22:36.469 --rc geninfo_unexecuted_blocks=1 00:22:36.469 00:22:36.469 ' 00:22:36.469 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:36.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.469 --rc genhtml_branch_coverage=1 00:22:36.469 --rc genhtml_function_coverage=1 00:22:36.469 --rc genhtml_legend=1 00:22:36.469 --rc geninfo_all_blocks=1 00:22:36.469 --rc geninfo_unexecuted_blocks=1 00:22:36.469 00:22:36.470 ' 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.470 --rc genhtml_branch_coverage=1 00:22:36.470 --rc genhtml_function_coverage=1 00:22:36.470 --rc genhtml_legend=1 00:22:36.470 --rc geninfo_all_blocks=1 00:22:36.470 --rc geninfo_unexecuted_blocks=1 00:22:36.470 00:22:36.470 ' 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
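The lt 1.15 2 / cmp_versions trace above is a plain field-by-field numeric comparison of the detected lcov version against 2, used to pick the LCOV_OPTS exported just above. A rough stand-alone equivalent of that comparison (sketch only; it assumes purely numeric version fields and is not the scripts/common.sh code itself):

    # Returns 0 (true) when version $1 is strictly lower than version $2, splitting on . - :
    ver_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1    # equal versions are not "lower than"
    }
    # ver_lt 1.15 2 && echo "lcov is older than 2"    # the branch taken in this run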
00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:36.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:36.470 04:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.374 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:38.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:38.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:38.375 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:38.375 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.375 04:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:38.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:22:38.635 00:22:38.635 --- 10.0.0.2 ping statistics --- 00:22:38.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.635 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:38.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:22:38.635 00:22:38.635 --- 10.0.0.1 ping statistics --- 00:22:38.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.635 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2349307 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2349307 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2349307 ']' 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.635 04:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.635 [2024-10-28 04:58:29.175009] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:22:38.635 [2024-10-28 04:58:29.175094] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.893 [2024-10-28 04:58:29.313764] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:22:38.893 [2024-10-28 04:58:29.350749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.893 [2024-10-28 04:58:29.397957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.893 [2024-10-28 04:58:29.398018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.893 [2024-10-28 04:58:29.398031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.893 [2024-10-28 04:58:29.398042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.893 [2024-10-28 04:58:29.398051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.893 [2024-10-28 04:58:29.398624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.826 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:39.826 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:39.826 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:39.826 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:39.826 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.826 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.826 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:39.826 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:40.084 true 00:22:40.084 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.084 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:40.342 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:40.342 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:40.342 04:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:40.601 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.601 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:40.859 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:40.859 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:40.859 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:41.117 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.117 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:41.375 04:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:41.375 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:41.375 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.375 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:41.633 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:41.633 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:41.633 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:41.891 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.891 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:42.149 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:42.149 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:42.149 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:42.407 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:42.407 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.665 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:42.665 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:42.665 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:42.665 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:42.665 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:22:42.665 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:42.665 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:22:42.665 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:22:42.665 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # 
key=ffeeddccbbaa99887766554433221100 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.iAGZa45fPX 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.yYDfHHcAgg 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.iAGZa45fPX 00:22:42.923 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.yYDfHHcAgg 00:22:42.924 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:43.182 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:43.441 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.iAGZa45fPX 00:22:43.441 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iAGZa45fPX 00:22:43.441 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:44.006 [2024-10-28 04:58:34.297035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.006 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:44.264 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:44.522 [2024-10-28 04:58:34.949138] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.522 [2024-10-28 04:58:34.949365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.522 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:44.780 malloc0 00:22:44.780 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:45.037 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iAGZa45fPX 
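The two NVMeTLSkey-1 strings generated above are TLS PSK interchange strings: a fixed prefix, a two-digit hash identifier (the digest argument, 01 in this run), and a base64 blob carrying the configured key bytes plus a CRC-32, terminated by a colon. The inline python - step traced in nvmf/common.sh assembles that blob; a rough stand-alone sketch of the same idea (the little-endian CRC byte order and the use of the literal hex string as the key bytes are assumptions inferred from the trace, not copied from common.sh):

    # Sketch of the format_interchange_psk / format_key step traced above (illustrative only).
    format_psk_sketch() {   # $1 = configured key string, $2 = digest id (1 in this run)
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$1" "$2"
    }
    # e.g.: format_psk_sketch 00112233445566778899aabbccddeeff 1
    #       format_psk_sketch ffeeddccbbaa99887766554433221100 1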
00:22:45.603 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:45.861 04:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.iAGZa45fPX 00:22:58.060 Initializing NVMe Controllers 00:22:58.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:58.060 Initialization complete. Launching workers. 00:22:58.060 ======================================================== 00:22:58.060 Latency(us) 00:22:58.060 Device Information : IOPS MiB/s Average min max 00:22:58.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7977.00 31.16 8025.46 1192.15 10100.33 00:22:58.060 ======================================================== 00:22:58.060 Total : 7977.00 31.16 8025.46 1192.15 10100.33 00:22:58.060 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iAGZa45fPX 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iAGZa45fPX 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2351289 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2351289 /var/tmp/bdevperf.sock 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2351289 ']' 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
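Stripped of the xtrace noise, the target-side setup that tls.sh has performed by this point is a short sequence of rpc.py calls. Condensed from the commands visible in the trace (same NQNs, address and key file as this run; in the test they are issued against the nvmf_tgt started earlier with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC sock_impl_set_options -i ssl --tls-version 13     # require TLS 1.3 on the ssl sock impl
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure (TLS) listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.iAGZa45fPX
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0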
00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.060 [2024-10-28 04:58:46.516901] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:22:58.060 [2024-10-28 04:58:46.517012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2351289 ] 00:22:58.060 [2024-10-28 04:58:46.651966] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:58.060 [2024-10-28 04:58:46.688296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.060 [2024-10-28 04:58:46.734563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:58.060 04:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iAGZa45fPX 00:22:58.061 04:58:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:58.061 [2024-10-28 04:58:47.427694] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.061 TLSTESTn1 00:22:58.061 04:58:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:58.061 Running I/O for 10 seconds... 
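On the host side the same trace reduces to three steps: start bdevperf idle, hand it the PSK through its private RPC socket, and attach to the TLS listener, after which bdevperf.py drives the I/O whose per-second throughput follows. Condensed from the commands in the trace (the script additionally waits for /var/tmp/bdevperf.sock with waitforlisten and installs cleanup traps):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iAGZa45fPX
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests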
00:22:59.429 3004.00 IOPS, 11.73 MiB/s [2024-10-28T03:58:50.959Z] 3165.50 IOPS, 12.37 MiB/s [2024-10-28T03:58:51.892Z] 3186.00 IOPS, 12.45 MiB/s [2024-10-28T03:58:52.826Z] 3205.25 IOPS, 12.52 MiB/s [2024-10-28T03:58:53.759Z] 3199.20 IOPS, 12.50 MiB/s [2024-10-28T03:58:54.693Z] 3212.17 IOPS, 12.55 MiB/s [2024-10-28T03:58:55.709Z] 3229.43 IOPS, 12.61 MiB/s [2024-10-28T03:58:56.671Z] 3236.75 IOPS, 12.64 MiB/s [2024-10-28T03:58:58.048Z] 3248.67 IOPS, 12.69 MiB/s [2024-10-28T03:58:58.048Z] 3250.00 IOPS, 12.70 MiB/s 00:23:07.452 Latency(us) 00:23:07.452 [2024-10-28T03:58:58.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.452 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:07.452 Verification LBA range: start 0x0 length 0x2000 00:23:07.452 TLSTESTn1 : 10.03 3251.65 12.70 0.00 0.00 39276.66 8905.21 41460.31 00:23:07.452 [2024-10-28T03:58:58.048Z] =================================================================================================================== 00:23:07.452 [2024-10-28T03:58:58.048Z] Total : 3251.65 12.70 0.00 0.00 39276.66 8905.21 41460.31 00:23:07.452 { 00:23:07.452 "results": [ 00:23:07.452 { 00:23:07.452 "job": "TLSTESTn1", 00:23:07.452 "core_mask": "0x4", 00:23:07.452 "workload": "verify", 00:23:07.452 "status": "finished", 00:23:07.452 "verify_range": { 00:23:07.452 "start": 0, 00:23:07.452 "length": 8192 00:23:07.452 }, 00:23:07.452 "queue_depth": 128, 00:23:07.452 "io_size": 4096, 00:23:07.452 "runtime": 10.033668, 00:23:07.452 "iops": 3251.6523369120846, 00:23:07.452 "mibps": 12.70176694106283, 00:23:07.452 "io_failed": 0, 00:23:07.452 "io_timeout": 0, 00:23:07.452 "avg_latency_us": 39276.66378029819, 00:23:07.452 "min_latency_us": 8905.207351030258, 00:23:07.452 "max_latency_us": 41460.30963430481 00:23:07.452 } 00:23:07.452 ], 00:23:07.452 "core_count": 1 00:23:07.452 } 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2351289 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2351289 ']' 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2351289 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2351289 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2351289' 00:23:07.452 killing process with pid 2351289 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2351289 00:23:07.452 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.452 00:23:07.452 Latency(us) 00:23:07.452 [2024-10-28T03:58:58.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.452 [2024-10-28T03:58:58.048Z] 
=================================================================================================================== 00:23:07.452 [2024-10-28T03:58:58.048Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2351289 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yYDfHHcAgg 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yYDfHHcAgg 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yYDfHHcAgg 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yYDfHHcAgg 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2352578 00:23:07.452 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.453 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.453 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2352578 /var/tmp/bdevperf.sock 00:23:07.453 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2352578 ']' 00:23:07.453 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.453 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:07.453 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
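The second bdevperf pass that starts here is a negative test: it repeats the attach against the same listener but loads the other key, /tmp/tmp.yYDfHHcAgg, so bdev_nvme_attach_controller is expected to fail, and the whole run_bdevperf call is wrapped in NOT to invert the exit status. A minimal sketch of such a wrapper, inferred from the es=1 / return-1 trace that follows rather than copied from autotest_common.sh:

    # Sketch of a NOT-style negative assertion (illustrative only).
    NOT_sketch() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # a crash or signal exit is never a pass
        (( es != 0 ))                    # succeed only if the wrapped command failed
    }
    # NOT_sketch run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yYDfHHcAgg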
00:23:07.453 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.453 04:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.453 [2024-10-28 04:58:57.940837] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:07.453 [2024-10-28 04:58:57.940936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352578 ] 00:23:07.711 [2024-10-28 04:58:58.074247] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:07.711 [2024-10-28 04:58:58.111565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.711 [2024-10-28 04:58:58.159082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.711 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.711 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:07.711 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yYDfHHcAgg 00:23:08.277 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.536 [2024-10-28 04:58:58.908583] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.536 [2024-10-28 04:58:58.918200] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:08.536 [2024-10-28 04:58:58.918644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa785e0 (107): Transport endpoint is not connected 00:23:08.536 [2024-10-28 04:58:58.919629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa785e0 (9): Bad file descriptor 00:23:08.536 [2024-10-28 04:58:58.920611] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:08.536 [2024-10-28 04:58:58.920654] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:08.536 [2024-10-28 04:58:58.920669] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:08.536 [2024-10-28 04:58:58.920689] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:08.536 request: 00:23:08.536 { 00:23:08.536 "name": "TLSTEST", 00:23:08.536 "trtype": "tcp", 00:23:08.536 "traddr": "10.0.0.2", 00:23:08.536 "adrfam": "ipv4", 00:23:08.536 "trsvcid": "4420", 00:23:08.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.536 "prchk_reftag": false, 00:23:08.536 "prchk_guard": false, 00:23:08.536 "hdgst": false, 00:23:08.536 "ddgst": false, 00:23:08.536 "psk": "key0", 00:23:08.536 "allow_unrecognized_csi": false, 00:23:08.536 "method": "bdev_nvme_attach_controller", 00:23:08.536 "req_id": 1 00:23:08.536 } 00:23:08.536 Got JSON-RPC error response 00:23:08.536 response: 00:23:08.536 { 00:23:08.536 "code": -5, 00:23:08.536 "message": "Input/output error" 00:23:08.536 } 00:23:08.536 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2352578 00:23:08.536 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2352578 ']' 00:23:08.536 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2352578 00:23:08.536 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:08.536 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:08.536 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2352578 00:23:08.536 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:08.536 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:08.536 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2352578' 00:23:08.536 killing process with pid 2352578 00:23:08.536 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2352578 00:23:08.536 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.536 00:23:08.536 Latency(us) 00:23:08.536 [2024-10-28T03:58:59.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.536 [2024-10-28T03:58:59.132Z] =================================================================================================================== 00:23:08.536 [2024-10-28T03:58:59.132Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.536 04:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2352578 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iAGZa45fPX 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.iAGZa45fPX 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iAGZa45fPX 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iAGZa45fPX 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2352766 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2352766 /var/tmp/bdevperf.sock 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2352766 ']' 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.795 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.795 [2024-10-28 04:58:59.226681] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:08.795 [2024-10-28 04:58:59.226770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352766 ] 00:23:08.795 [2024-10-28 04:58:59.361268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:09.054 [2024-10-28 04:58:59.398111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.054 [2024-10-28 04:58:59.445141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.054 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.054 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:09.054 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iAGZa45fPX 00:23:09.311 04:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:09.571 [2024-10-28 04:59:00.078699] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.571 [2024-10-28 04:59:00.088242] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:09.571 [2024-10-28 04:59:00.088289] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:09.571 [2024-10-28 04:59:00.088355] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:09.571 [2024-10-28 04:59:00.088946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86a5e0 (107): Transport endpoint is not connected 00:23:09.571 [2024-10-28 04:59:00.089934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86a5e0 (9): Bad file descriptor 00:23:09.571 [2024-10-28 04:59:00.090930] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:09.571 [2024-10-28 04:59:00.090971] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:09.571 [2024-10-28 04:59:00.090985] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:09.571 [2024-10-28 04:59:00.091004] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
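The failure above (tls.sh @150) exercises a wrong host NQN: the TCP connection comes up, but the target cannot find a pre-shared key for the TLS identity 'NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1'. The identity presented during the handshake embeds both the host NQN and the subsystem NQN, and the target's key is registered for host1 (see the nvmf_subsystem_add_host call later in this log), so a session claiming host2 matches nothing and the attach again ends in -5. For host2 to succeed, the target would need its own registration along these lines (hypothetical here; the test deliberately omits it):

scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk key0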
00:23:09.571 request: 00:23:09.571 { 00:23:09.571 "name": "TLSTEST", 00:23:09.571 "trtype": "tcp", 00:23:09.571 "traddr": "10.0.0.2", 00:23:09.571 "adrfam": "ipv4", 00:23:09.571 "trsvcid": "4420", 00:23:09.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.571 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:09.571 "prchk_reftag": false, 00:23:09.571 "prchk_guard": false, 00:23:09.571 "hdgst": false, 00:23:09.571 "ddgst": false, 00:23:09.571 "psk": "key0", 00:23:09.571 "allow_unrecognized_csi": false, 00:23:09.571 "method": "bdev_nvme_attach_controller", 00:23:09.571 "req_id": 1 00:23:09.571 } 00:23:09.571 Got JSON-RPC error response 00:23:09.571 response: 00:23:09.571 { 00:23:09.571 "code": -5, 00:23:09.571 "message": "Input/output error" 00:23:09.571 } 00:23:09.571 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2352766 00:23:09.571 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2352766 ']' 00:23:09.571 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2352766 00:23:09.571 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:09.571 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.571 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2352766 00:23:09.571 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:09.571 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:09.571 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2352766' 00:23:09.571 killing process with pid 2352766 00:23:09.571 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2352766 00:23:09.571 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.571 00:23:09.571 Latency(us) 00:23:09.571 [2024-10-28T03:59:00.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.571 [2024-10-28T03:59:00.167Z] =================================================================================================================== 00:23:09.571 [2024-10-28T03:59:00.167Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.571 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2352766 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iAGZa45fPX 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.iAGZa45fPX 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iAGZa45fPX 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iAGZa45fPX 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2352918 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2352918 /var/tmp/bdevperf.sock 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2352918 ']' 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.830 04:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.830 [2024-10-28 04:59:00.390689] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:09.830 [2024-10-28 04:59:00.390788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352918 ] 00:23:10.090 [2024-10-28 04:59:00.525386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:10.090 [2024-10-28 04:59:00.560313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.090 [2024-10-28 04:59:00.604794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.027 04:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.027 04:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:11.027 04:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iAGZa45fPX 00:23:11.285 04:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.543 [2024-10-28 04:59:02.068658] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.543 [2024-10-28 04:59:02.078574] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:11.543 [2024-10-28 04:59:02.078608] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:11.543 [2024-10-28 04:59:02.078669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:11.543 [2024-10-28 04:59:02.078776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe045e0 (107): Transport endpoint is not connected 00:23:11.543 [2024-10-28 04:59:02.079765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe045e0 (9): Bad file descriptor 00:23:11.543 [2024-10-28 04:59:02.080762] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:11.543 [2024-10-28 04:59:02.080785] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:11.544 [2024-10-28 04:59:02.080799] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:11.544 [2024-10-28 04:59:02.080822] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
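The case above (tls.sh @153) is the mirror image: correct host NQN, but the attach targets cnode2, so the identity 'NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2' again has no matching PSK on the target and the result is identical. The binding is per (subsystem NQN, host NQN) pair, not per key file. As a debugging aside that is not part of this test, the initiator-side keyring contents can be listed with SPDK's keyring_get_keys RPC:

scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_get_keys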
00:23:11.544 request: 00:23:11.544 { 00:23:11.544 "name": "TLSTEST", 00:23:11.544 "trtype": "tcp", 00:23:11.544 "traddr": "10.0.0.2", 00:23:11.544 "adrfam": "ipv4", 00:23:11.544 "trsvcid": "4420", 00:23:11.544 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:11.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.544 "prchk_reftag": false, 00:23:11.544 "prchk_guard": false, 00:23:11.544 "hdgst": false, 00:23:11.544 "ddgst": false, 00:23:11.544 "psk": "key0", 00:23:11.544 "allow_unrecognized_csi": false, 00:23:11.544 "method": "bdev_nvme_attach_controller", 00:23:11.544 "req_id": 1 00:23:11.544 } 00:23:11.544 Got JSON-RPC error response 00:23:11.544 response: 00:23:11.544 { 00:23:11.544 "code": -5, 00:23:11.544 "message": "Input/output error" 00:23:11.544 } 00:23:11.544 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2352918 00:23:11.544 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2352918 ']' 00:23:11.544 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2352918 00:23:11.544 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:11.544 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.544 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2352918 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2352918' 00:23:11.802 killing process with pid 2352918 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2352918 00:23:11.802 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.802 00:23:11.802 Latency(us) 00:23:11.802 [2024-10-28T03:59:02.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.802 [2024-10-28T03:59:02.398Z] =================================================================================================================== 00:23:11.802 [2024-10-28T03:59:02.398Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2352918 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:11.802 
04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2353115 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2353115 /var/tmp/bdevperf.sock 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2353115 ']' 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:11.802 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.802 [2024-10-28 04:59:02.386109] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:11.802 [2024-10-28 04:59:02.386207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353115 ] 00:23:12.061 [2024-10-28 04:59:02.518627] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
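The case being set up here (tls.sh @156) passes an empty string instead of a key path. As the next lines show, keyring_file_add_key rejects it outright ('Non-absolute paths are not allowed', JSON-RPC -1), so key0 never exists in this bdevperf instance and the subsequent attach fails with -126 (Required key not available) rather than with a handshake error. The file-based keyring accepts only absolute paths to suitably protected files, roughly as follows (the non-empty paths below are illustrative only):

scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''            # rejected: empty/non-absolute path
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 tls_key.txt   # rejected for the same reason
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tls_key  # accepted if the file exists with 0600 permissions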
00:23:12.061 [2024-10-28 04:59:02.554472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.061 [2024-10-28 04:59:02.599439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.320 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:12.320 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:12.320 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:12.579 [2024-10-28 04:59:02.975946] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:12.579 [2024-10-28 04:59:02.975986] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:12.579 request: 00:23:12.579 { 00:23:12.579 "name": "key0", 00:23:12.579 "path": "", 00:23:12.579 "method": "keyring_file_add_key", 00:23:12.579 "req_id": 1 00:23:12.579 } 00:23:12.579 Got JSON-RPC error response 00:23:12.579 response: 00:23:12.579 { 00:23:12.579 "code": -1, 00:23:12.579 "message": "Operation not permitted" 00:23:12.579 } 00:23:12.579 04:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:12.837 [2024-10-28 04:59:03.252121] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.837 [2024-10-28 04:59:03.252175] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:12.837 request: 00:23:12.837 { 00:23:12.837 "name": "TLSTEST", 00:23:12.837 "trtype": "tcp", 00:23:12.837 "traddr": "10.0.0.2", 00:23:12.837 "adrfam": "ipv4", 00:23:12.837 "trsvcid": "4420", 00:23:12.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.837 "prchk_reftag": false, 00:23:12.837 "prchk_guard": false, 00:23:12.837 "hdgst": false, 00:23:12.837 "ddgst": false, 00:23:12.837 "psk": "key0", 00:23:12.837 "allow_unrecognized_csi": false, 00:23:12.837 "method": "bdev_nvme_attach_controller", 00:23:12.837 "req_id": 1 00:23:12.837 } 00:23:12.837 Got JSON-RPC error response 00:23:12.837 response: 00:23:12.837 { 00:23:12.837 "code": -126, 00:23:12.837 "message": "Required key not available" 00:23:12.837 } 00:23:12.837 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2353115 00:23:12.837 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2353115 ']' 00:23:12.837 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2353115 00:23:12.837 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:12.837 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.837 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2353115 00:23:12.837 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:12.837 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:12.837 04:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2353115' 00:23:12.837 killing process with pid 2353115 00:23:12.837 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2353115 00:23:12.837 Received shutdown signal, test time was about 10.000000 seconds 00:23:12.837 00:23:12.837 Latency(us) 00:23:12.837 [2024-10-28T03:59:03.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.837 [2024-10-28T03:59:03.433Z] =================================================================================================================== 00:23:12.837 [2024-10-28T03:59:03.433Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:12.837 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2353115 00:23:13.095 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:13.095 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2349307 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2349307 ']' 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2349307 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2349307 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2349307' 00:23:13.096 killing process with pid 2349307 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2349307 00:23:13.096 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2349307 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # 
python - 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.BVlPff9GKA 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.BVlPff9GKA 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2353374 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2353374 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2353374 ']' 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:13.354 04:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.354 [2024-10-28 04:59:03.844825] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:13.354 [2024-10-28 04:59:03.844913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.614 [2024-10-28 04:59:03.983630] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:13.614 [2024-10-28 04:59:04.023972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.614 [2024-10-28 04:59:04.071549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.614 [2024-10-28 04:59:04.071641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
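The key_long value above comes from the format_interchange_psk/format_key helpers in nvmf/common.sh: the 48-character hex string is treated as the configured PSK, a 4-byte CRC32 is appended, and the result is base64-encoded into the NVMe TLS PSK interchange form 'NVMeTLSkey-1:<hash>:<base64>:' (the 02 hash field corresponds to SHA-384). The key is then written to the mktemp file and chmod 0600 so the file-based keyring will accept it. A stand-alone sketch of that encoding follows, with the little-endian CRC byte order an assumption inferred from the generated key rather than taken from the helper itself:

key=00112233445566778899aabbccddeeff0011223344556677
digest=2
python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # configured PSK bytes (the hex string used as-is)
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC32 trailer (byte order assumed)
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY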
00:23:13.614 [2024-10-28 04:59:04.071661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.614 [2024-10-28 04:59:04.071686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.614 [2024-10-28 04:59:04.071698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.614 [2024-10-28 04:59:04.072390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.548 04:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:14.548 04:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:14.548 04:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:14.548 04:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:14.548 04:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.548 04:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.548 04:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.BVlPff9GKA 00:23:14.548 04:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BVlPff9GKA 00:23:14.548 04:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:14.806 [2024-10-28 04:59:05.215181] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.806 04:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:15.065 04:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:15.324 [2024-10-28 04:59:05.823388] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.324 [2024-10-28 04:59:05.823710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.324 04:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:15.584 malloc0 00:23:15.584 04:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:15.842 04:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA 00:23:16.408 04:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BVlPff9GKA 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BVlPff9GKA 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2353678 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2353678 /var/tmp/bdevperf.sock 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2353678 ']' 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.666 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.666 [2024-10-28 04:59:07.096877] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:16.667 [2024-10-28 04:59:07.096999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353678 ] 00:23:16.667 [2024-10-28 04:59:07.232437] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
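This is the first positive TLS case (tls.sh @164 through @168): the RPCs above configured a fresh nvmf target with a TCP transport, a listener that requires a secure channel (-k), a 32 MB malloc namespace under cnode1, and host1 allowed with the interchange key registered as key0; the bdevperf instance just started registers the same key file on its side, attaches TLSTESTn1 over that listener, and runs the 10 second verify workload (roughly 3k IOPS at 4 KiB in the results further down). Condensed from the target-side RPCs visible above, issued against the target's default RPC socket:

scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0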
00:23:16.924 [2024-10-28 04:59:07.268829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.924 [2024-10-28 04:59:07.313967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.924 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.924 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:16.924 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA 00:23:17.182 04:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:17.748 [2024-10-28 04:59:08.054252] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.749 TLSTESTn1 00:23:17.749 04:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:17.749 Running I/O for 10 seconds... 00:23:20.069 2956.00 IOPS, 11.55 MiB/s [2024-10-28T03:59:11.601Z] 3027.50 IOPS, 11.83 MiB/s [2024-10-28T03:59:12.541Z] 3062.67 IOPS, 11.96 MiB/s [2024-10-28T03:59:13.482Z] 3067.50 IOPS, 11.98 MiB/s [2024-10-28T03:59:14.419Z] 3068.80 IOPS, 11.99 MiB/s [2024-10-28T03:59:15.361Z] 3070.17 IOPS, 11.99 MiB/s [2024-10-28T03:59:16.300Z] 3069.43 IOPS, 11.99 MiB/s [2024-10-28T03:59:17.679Z] 3065.12 IOPS, 11.97 MiB/s [2024-10-28T03:59:18.614Z] 3076.56 IOPS, 12.02 MiB/s [2024-10-28T03:59:18.614Z] 3079.70 IOPS, 12.03 MiB/s 00:23:28.018 Latency(us) 00:23:28.018 [2024-10-28T03:59:18.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.018 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:28.018 Verification LBA range: start 0x0 length 0x2000 00:23:28.018 TLSTESTn1 : 10.03 3083.06 12.04 0.00 0.00 41435.77 7250.69 58784.10 00:23:28.018 [2024-10-28T03:59:18.614Z] =================================================================================================================== 00:23:28.018 [2024-10-28T03:59:18.614Z] Total : 3083.06 12.04 0.00 0.00 41435.77 7250.69 58784.10 00:23:28.018 { 00:23:28.018 "results": [ 00:23:28.018 { 00:23:28.018 "job": "TLSTESTn1", 00:23:28.018 "core_mask": "0x4", 00:23:28.018 "workload": "verify", 00:23:28.018 "status": "finished", 00:23:28.018 "verify_range": { 00:23:28.018 "start": 0, 00:23:28.018 "length": 8192 00:23:28.018 }, 00:23:28.018 "queue_depth": 128, 00:23:28.018 "io_size": 4096, 00:23:28.018 "runtime": 10.030297, 00:23:28.018 "iops": 3083.0592553739934, 00:23:28.018 "mibps": 12.043200216304662, 00:23:28.018 "io_failed": 0, 00:23:28.018 "io_timeout": 0, 00:23:28.018 "avg_latency_us": 41435.76524243811, 00:23:28.018 "min_latency_us": 7250.687952478188, 00:23:28.018 "max_latency_us": 58784.10098385001 00:23:28.018 } 00:23:28.018 ], 00:23:28.018 "core_count": 1 00:23:28.018 } 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2353678 00:23:28.018 04:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2353678 ']' 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2353678 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2353678 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2353678' 00:23:28.018 killing process with pid 2353678 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2353678 00:23:28.018 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.018 00:23:28.018 Latency(us) 00:23:28.018 [2024-10-28T03:59:18.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.018 [2024-10-28T03:59:18.614Z] =================================================================================================================== 00:23:28.018 [2024-10-28T03:59:18.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2353678 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.BVlPff9GKA 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BVlPff9GKA 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BVlPff9GKA 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BVlPff9GKA 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BVlPff9GKA 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@28 -- # bdevperf_pid=2354963 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2354963 /var/tmp/bdevperf.sock 00:23:28.018 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2354963 ']' 00:23:28.019 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.019 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.019 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.019 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.019 04:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.019 [2024-10-28 04:59:18.583993] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:28.019 [2024-10-28 04:59:18.584085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354963 ] 00:23:28.279 [2024-10-28 04:59:18.723606] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
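The bdevperf instance starting here serves the tls.sh @172 case: the key file itself is still valid, but it was chmod 0666 just above, so on the next lines keyring_file_add_key refuses to load it ('Invalid permissions for key file /tmp/tmp.BVlPff9GKA: 0100666') and the attach then fails with -126 because key0 was never created. The file-based keyring rejects key files readable by group or others, which is why the earlier setup used owner-only permissions:

chmod 0600 /tmp/tmp.BVlPff9GKA   # accepted by keyring_file_add_key (used when the key was created)
chmod 0666 /tmp/tmp.BVlPff9GKA   # rejected: world-readable key material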
00:23:28.279 [2024-10-28 04:59:18.760207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.279 [2024-10-28 04:59:18.809288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.217 04:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:29.217 04:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:29.217 04:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA 00:23:29.475 [2024-10-28 04:59:19.893264] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BVlPff9GKA': 0100666 00:23:29.475 [2024-10-28 04:59:19.893318] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:29.475 request: 00:23:29.475 { 00:23:29.475 "name": "key0", 00:23:29.475 "path": "/tmp/tmp.BVlPff9GKA", 00:23:29.475 "method": "keyring_file_add_key", 00:23:29.475 "req_id": 1 00:23:29.475 } 00:23:29.475 Got JSON-RPC error response 00:23:29.475 response: 00:23:29.475 { 00:23:29.475 "code": -1, 00:23:29.475 "message": "Operation not permitted" 00:23:29.475 } 00:23:29.475 04:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:29.736 [2024-10-28 04:59:20.193444] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.736 [2024-10-28 04:59:20.193517] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:29.736 request: 00:23:29.736 { 00:23:29.736 "name": "TLSTEST", 00:23:29.736 "trtype": "tcp", 00:23:29.736 "traddr": "10.0.0.2", 00:23:29.736 "adrfam": "ipv4", 00:23:29.736 "trsvcid": "4420", 00:23:29.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.736 "prchk_reftag": false, 00:23:29.736 "prchk_guard": false, 00:23:29.736 "hdgst": false, 00:23:29.736 "ddgst": false, 00:23:29.736 "psk": "key0", 00:23:29.736 "allow_unrecognized_csi": false, 00:23:29.736 "method": "bdev_nvme_attach_controller", 00:23:29.736 "req_id": 1 00:23:29.736 } 00:23:29.736 Got JSON-RPC error response 00:23:29.736 response: 00:23:29.736 { 00:23:29.736 "code": -126, 00:23:29.736 "message": "Required key not available" 00:23:29.736 } 00:23:29.736 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2354963 00:23:29.736 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2354963 ']' 00:23:29.736 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2354963 00:23:29.736 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:29.736 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.736 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2354963 00:23:29.736 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:29.736 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = 
sudo ']' 00:23:29.736 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2354963' 00:23:29.736 killing process with pid 2354963 00:23:29.736 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2354963 00:23:29.736 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.736 00:23:29.736 Latency(us) 00:23:29.736 [2024-10-28T03:59:20.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.736 [2024-10-28T03:59:20.332Z] =================================================================================================================== 00:23:29.736 [2024-10-28T03:59:20.332Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.736 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2354963 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2353374 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2353374 ']' 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2353374 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2353374 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2353374' 00:23:29.997 killing process with pid 2353374 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2353374 00:23:29.997 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2353374 00:23:30.256 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:30.256 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:30.256 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:30.256 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.256 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2355238 00:23:30.256 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:30.256 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 
-- # waitforlisten 2355238 00:23:30.256 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2355238 ']' 00:23:30.256 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.256 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.257 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.257 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.257 04:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.257 [2024-10-28 04:59:20.769894] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:30.257 [2024-10-28 04:59:20.769983] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.516 [2024-10-28 04:59:20.906221] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:30.516 [2024-10-28 04:59:20.947082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.516 [2024-10-28 04:59:20.995200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.516 [2024-10-28 04:59:20.995272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.516 [2024-10-28 04:59:20.995290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.516 [2024-10-28 04:59:20.995304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.516 [2024-10-28 04:59:20.995316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
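Both failures at the top of this run come from the mode on the PSK file: the file-based keyring rejects /tmp/tmp.BVlPff9GKA because 0666 leaves it readable by group and others ("Operation not permitted"), and since key0 was never registered, the later bdev_nvme_attach_controller call cannot load the PSK ("Required key not available"). A minimal sketch of the fix, reusing the path, key name and RPC socket from the log above (the script applies the same chmod itself at target/tls.sh@182 further down; rpc.py is shortened to scripts/rpc.py relative to the SPDK checkout):

    # restrict the PSK file to owner read/write, then retry the keyring add
    chmod 0600 /tmp/tmp.BVlPff9GKA
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA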
00:23:30.516 [2024-10-28 04:59:20.995989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.BVlPff9GKA 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.BVlPff9GKA 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.BVlPff9GKA 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BVlPff9GKA 00:23:30.775 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:31.034 [2024-10-28 04:59:21.419185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.034 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:31.293 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:31.552 [2024-10-28 04:59:21.951303] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.552 [2024-10-28 04:59:21.951564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.552 04:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:31.810 malloc0 00:23:31.810 04:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:32.068 04:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA 00:23:32.327 [2024-10-28 
04:59:22.787801] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BVlPff9GKA': 0100666 00:23:32.327 [2024-10-28 04:59:22.787840] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:32.327 request: 00:23:32.327 { 00:23:32.327 "name": "key0", 00:23:32.327 "path": "/tmp/tmp.BVlPff9GKA", 00:23:32.327 "method": "keyring_file_add_key", 00:23:32.327 "req_id": 1 00:23:32.327 } 00:23:32.327 Got JSON-RPC error response 00:23:32.327 response: 00:23:32.327 { 00:23:32.327 "code": -1, 00:23:32.327 "message": "Operation not permitted" 00:23:32.327 } 00:23:32.327 04:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:32.585 [2024-10-28 04:59:23.063986] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:32.585 [2024-10-28 04:59:23.064043] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:32.585 request: 00:23:32.585 { 00:23:32.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.585 "host": "nqn.2016-06.io.spdk:host1", 00:23:32.585 "psk": "key0", 00:23:32.585 "method": "nvmf_subsystem_add_host", 00:23:32.585 "req_id": 1 00:23:32.585 } 00:23:32.585 Got JSON-RPC error response 00:23:32.585 response: 00:23:32.585 { 00:23:32.585 "code": -32603, 00:23:32.585 "message": "Internal error" 00:23:32.585 } 00:23:32.585 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:32.585 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:32.585 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:32.585 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:32.585 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2355238 00:23:32.585 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2355238 ']' 00:23:32.585 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2355238 00:23:32.585 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:32.585 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:32.586 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2355238 00:23:32.586 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:32.586 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:32.586 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2355238' 00:23:32.586 killing process with pid 2355238 00:23:32.586 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2355238 00:23:32.586 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2355238 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.BVlPff9GKA 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:32.845 04:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2355646 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2355646 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2355646 ']' 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.845 04:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.845 [2024-10-28 04:59:23.384089] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:32.845 [2024-10-28 04:59:23.384195] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.105 [2024-10-28 04:59:23.522065] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:33.105 [2024-10-28 04:59:23.557258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.105 [2024-10-28 04:59:23.601344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.105 [2024-10-28 04:59:23.601429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.105 [2024-10-28 04:59:23.601443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.105 [2024-10-28 04:59:23.601463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.105 [2024-10-28 04:59:23.601472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
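With the key file now at mode 0600, setup_nvmf_tgt (target/tls.sh@186) provisions this target (pid 2355646) in the trace that follows. The same sequence condensed into its underlying rpc.py calls, with every NQN, address and name taken from the log (rpc.py again shortened to scripts/rpc.py):

    # TCP transport, a subsystem backed by one malloc namespace, and a TLS listener (-k)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # register the PSK under the name key0 and bind it to the allowed host
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

This time keyring_file_add_key succeeds, so nvmf_subsystem_add_host no longer fails with "Key 'key0' does not exist" the way it did for pid 2355238.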
00:23:33.105 [2024-10-28 04:59:23.602089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.044 04:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.044 04:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:34.044 04:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:34.044 04:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.044 04:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.044 04:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.044 04:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.BVlPff9GKA 00:23:34.044 04:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BVlPff9GKA 00:23:34.044 04:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:34.302 [2024-10-28 04:59:24.760959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.302 04:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:34.561 04:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:34.821 [2024-10-28 04:59:25.341132] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.821 [2024-10-28 04:59:25.341402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.821 04:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:35.079 malloc0 00:23:35.080 04:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:35.649 04:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA 00:23:35.946 04:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:36.230 04:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2356019 00:23:36.230 04:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:36.230 04:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2356019 /var/tmp/bdevperf.sock 00:23:36.230 04:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2356019 ']' 00:23:36.230 04:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:36.230 04:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.230 04:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:36.230 04:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.230 04:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:36.230 04:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.230 [2024-10-28 04:59:26.638221] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:36.230 [2024-10-28 04:59:26.638297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356019 ] 00:23:36.230 [2024-10-28 04:59:26.770762] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:36.230 [2024-10-28 04:59:26.806012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.490 [2024-10-28 04:59:26.853271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.429 04:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.429 04:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:37.429 04:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA 00:23:37.429 04:59:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:37.996 [2024-10-28 04:59:28.321938] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:37.996 TLSTESTn1 00:23:37.996 04:59:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:38.255 04:59:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:38.255 "subsystems": [ 00:23:38.255 { 00:23:38.255 "subsystem": "keyring", 00:23:38.255 "config": [ 00:23:38.255 { 00:23:38.255 "method": "keyring_file_add_key", 00:23:38.255 "params": { 00:23:38.255 "name": "key0", 00:23:38.255 "path": "/tmp/tmp.BVlPff9GKA" 00:23:38.255 } 00:23:38.255 } 00:23:38.255 ] 00:23:38.255 }, 00:23:38.255 { 00:23:38.255 "subsystem": "iobuf", 00:23:38.255 "config": [ 00:23:38.255 { 00:23:38.255 "method": "iobuf_set_options", 00:23:38.255 "params": { 00:23:38.255 "small_pool_count": 8192, 00:23:38.255 "large_pool_count": 1024, 00:23:38.255 "small_bufsize": 8192, 00:23:38.255 "large_bufsize": 135168, 00:23:38.255 "enable_numa": false 00:23:38.255 } 00:23:38.255 } 00:23:38.255 ] 00:23:38.255 }, 
00:23:38.255 { 00:23:38.255 "subsystem": "sock", 00:23:38.255 "config": [ 00:23:38.255 { 00:23:38.255 "method": "sock_set_default_impl", 00:23:38.255 "params": { 00:23:38.255 "impl_name": "posix" 00:23:38.255 } 00:23:38.255 }, 00:23:38.255 { 00:23:38.255 "method": "sock_impl_set_options", 00:23:38.255 "params": { 00:23:38.255 "impl_name": "ssl", 00:23:38.255 "recv_buf_size": 4096, 00:23:38.255 "send_buf_size": 4096, 00:23:38.255 "enable_recv_pipe": true, 00:23:38.255 "enable_quickack": false, 00:23:38.255 "enable_placement_id": 0, 00:23:38.255 "enable_zerocopy_send_server": true, 00:23:38.255 "enable_zerocopy_send_client": false, 00:23:38.255 "zerocopy_threshold": 0, 00:23:38.255 "tls_version": 0, 00:23:38.255 "enable_ktls": false 00:23:38.255 } 00:23:38.255 }, 00:23:38.255 { 00:23:38.255 "method": "sock_impl_set_options", 00:23:38.255 "params": { 00:23:38.255 "impl_name": "posix", 00:23:38.255 "recv_buf_size": 2097152, 00:23:38.255 "send_buf_size": 2097152, 00:23:38.255 "enable_recv_pipe": true, 00:23:38.255 "enable_quickack": false, 00:23:38.255 "enable_placement_id": 0, 00:23:38.255 "enable_zerocopy_send_server": true, 00:23:38.255 "enable_zerocopy_send_client": false, 00:23:38.255 "zerocopy_threshold": 0, 00:23:38.255 "tls_version": 0, 00:23:38.255 "enable_ktls": false 00:23:38.255 } 00:23:38.255 } 00:23:38.255 ] 00:23:38.255 }, 00:23:38.255 { 00:23:38.255 "subsystem": "vmd", 00:23:38.255 "config": [] 00:23:38.255 }, 00:23:38.255 { 00:23:38.255 "subsystem": "accel", 00:23:38.255 "config": [ 00:23:38.255 { 00:23:38.255 "method": "accel_set_options", 00:23:38.255 "params": { 00:23:38.255 "small_cache_size": 128, 00:23:38.255 "large_cache_size": 16, 00:23:38.255 "task_count": 2048, 00:23:38.255 "sequence_count": 2048, 00:23:38.255 "buf_count": 2048 00:23:38.255 } 00:23:38.255 } 00:23:38.255 ] 00:23:38.255 }, 00:23:38.255 { 00:23:38.255 "subsystem": "bdev", 00:23:38.255 "config": [ 00:23:38.255 { 00:23:38.255 "method": "bdev_set_options", 00:23:38.255 "params": { 00:23:38.255 "bdev_io_pool_size": 65535, 00:23:38.255 "bdev_io_cache_size": 256, 00:23:38.255 "bdev_auto_examine": true, 00:23:38.255 "iobuf_small_cache_size": 128, 00:23:38.255 "iobuf_large_cache_size": 16 00:23:38.255 } 00:23:38.255 }, 00:23:38.255 { 00:23:38.255 "method": "bdev_raid_set_options", 00:23:38.255 "params": { 00:23:38.255 "process_window_size_kb": 1024, 00:23:38.255 "process_max_bandwidth_mb_sec": 0 00:23:38.255 } 00:23:38.255 }, 00:23:38.255 { 00:23:38.255 "method": "bdev_iscsi_set_options", 00:23:38.255 "params": { 00:23:38.255 "timeout_sec": 30 00:23:38.255 } 00:23:38.255 }, 00:23:38.255 { 00:23:38.255 "method": "bdev_nvme_set_options", 00:23:38.255 "params": { 00:23:38.255 "action_on_timeout": "none", 00:23:38.255 "timeout_us": 0, 00:23:38.255 "timeout_admin_us": 0, 00:23:38.255 "keep_alive_timeout_ms": 10000, 00:23:38.255 "arbitration_burst": 0, 00:23:38.255 "low_priority_weight": 0, 00:23:38.255 "medium_priority_weight": 0, 00:23:38.255 "high_priority_weight": 0, 00:23:38.255 "nvme_adminq_poll_period_us": 10000, 00:23:38.255 "nvme_ioq_poll_period_us": 0, 00:23:38.255 "io_queue_requests": 0, 00:23:38.255 "delay_cmd_submit": true, 00:23:38.255 "transport_retry_count": 4, 00:23:38.255 "bdev_retry_count": 3, 00:23:38.255 "transport_ack_timeout": 0, 00:23:38.255 "ctrlr_loss_timeout_sec": 0, 00:23:38.255 "reconnect_delay_sec": 0, 00:23:38.255 "fast_io_fail_timeout_sec": 0, 00:23:38.255 "disable_auto_failback": false, 00:23:38.255 "generate_uuids": false, 00:23:38.255 "transport_tos": 0, 00:23:38.255 
"nvme_error_stat": false, 00:23:38.255 "rdma_srq_size": 0, 00:23:38.255 "io_path_stat": false, 00:23:38.255 "allow_accel_sequence": false, 00:23:38.255 "rdma_max_cq_size": 0, 00:23:38.255 "rdma_cm_event_timeout_ms": 0, 00:23:38.255 "dhchap_digests": [ 00:23:38.255 "sha256", 00:23:38.255 "sha384", 00:23:38.255 "sha512" 00:23:38.255 ], 00:23:38.255 "dhchap_dhgroups": [ 00:23:38.255 "null", 00:23:38.255 "ffdhe2048", 00:23:38.255 "ffdhe3072", 00:23:38.255 "ffdhe4096", 00:23:38.255 "ffdhe6144", 00:23:38.255 "ffdhe8192" 00:23:38.255 ] 00:23:38.255 } 00:23:38.255 }, 00:23:38.255 { 00:23:38.255 "method": "bdev_nvme_set_hotplug", 00:23:38.255 "params": { 00:23:38.255 "period_us": 100000, 00:23:38.255 "enable": false 00:23:38.255 } 00:23:38.255 }, 00:23:38.255 { 00:23:38.256 "method": "bdev_malloc_create", 00:23:38.256 "params": { 00:23:38.256 "name": "malloc0", 00:23:38.256 "num_blocks": 8192, 00:23:38.256 "block_size": 4096, 00:23:38.256 "physical_block_size": 4096, 00:23:38.256 "uuid": "20ab89c8-30f0-4c01-84c0-463ffb552448", 00:23:38.256 "optimal_io_boundary": 0, 00:23:38.256 "md_size": 0, 00:23:38.256 "dif_type": 0, 00:23:38.256 "dif_is_head_of_md": false, 00:23:38.256 "dif_pi_format": 0 00:23:38.256 } 00:23:38.256 }, 00:23:38.256 { 00:23:38.256 "method": "bdev_wait_for_examine" 00:23:38.256 } 00:23:38.256 ] 00:23:38.256 }, 00:23:38.256 { 00:23:38.256 "subsystem": "nbd", 00:23:38.256 "config": [] 00:23:38.256 }, 00:23:38.256 { 00:23:38.256 "subsystem": "scheduler", 00:23:38.256 "config": [ 00:23:38.256 { 00:23:38.256 "method": "framework_set_scheduler", 00:23:38.256 "params": { 00:23:38.256 "name": "static" 00:23:38.256 } 00:23:38.256 } 00:23:38.256 ] 00:23:38.256 }, 00:23:38.256 { 00:23:38.256 "subsystem": "nvmf", 00:23:38.256 "config": [ 00:23:38.256 { 00:23:38.256 "method": "nvmf_set_config", 00:23:38.256 "params": { 00:23:38.256 "discovery_filter": "match_any", 00:23:38.256 "admin_cmd_passthru": { 00:23:38.256 "identify_ctrlr": false 00:23:38.256 }, 00:23:38.256 "dhchap_digests": [ 00:23:38.256 "sha256", 00:23:38.256 "sha384", 00:23:38.256 "sha512" 00:23:38.256 ], 00:23:38.256 "dhchap_dhgroups": [ 00:23:38.256 "null", 00:23:38.256 "ffdhe2048", 00:23:38.256 "ffdhe3072", 00:23:38.256 "ffdhe4096", 00:23:38.256 "ffdhe6144", 00:23:38.256 "ffdhe8192" 00:23:38.256 ] 00:23:38.256 } 00:23:38.256 }, 00:23:38.256 { 00:23:38.256 "method": "nvmf_set_max_subsystems", 00:23:38.256 "params": { 00:23:38.256 "max_subsystems": 1024 00:23:38.256 } 00:23:38.256 }, 00:23:38.256 { 00:23:38.256 "method": "nvmf_set_crdt", 00:23:38.256 "params": { 00:23:38.256 "crdt1": 0, 00:23:38.256 "crdt2": 0, 00:23:38.256 "crdt3": 0 00:23:38.256 } 00:23:38.256 }, 00:23:38.256 { 00:23:38.256 "method": "nvmf_create_transport", 00:23:38.256 "params": { 00:23:38.256 "trtype": "TCP", 00:23:38.256 "max_queue_depth": 128, 00:23:38.256 "max_io_qpairs_per_ctrlr": 127, 00:23:38.256 "in_capsule_data_size": 4096, 00:23:38.256 "max_io_size": 131072, 00:23:38.256 "io_unit_size": 131072, 00:23:38.256 "max_aq_depth": 128, 00:23:38.256 "num_shared_buffers": 511, 00:23:38.256 "buf_cache_size": 4294967295, 00:23:38.256 "dif_insert_or_strip": false, 00:23:38.256 "zcopy": false, 00:23:38.256 "c2h_success": false, 00:23:38.256 "sock_priority": 0, 00:23:38.256 "abort_timeout_sec": 1, 00:23:38.256 "ack_timeout": 0, 00:23:38.256 "data_wr_pool_size": 0 00:23:38.256 } 00:23:38.256 }, 00:23:38.256 { 00:23:38.256 "method": "nvmf_create_subsystem", 00:23:38.256 "params": { 00:23:38.256 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.256 "allow_any_host": false, 
00:23:38.256 "serial_number": "SPDK00000000000001", 00:23:38.256 "model_number": "SPDK bdev Controller", 00:23:38.256 "max_namespaces": 10, 00:23:38.256 "min_cntlid": 1, 00:23:38.256 "max_cntlid": 65519, 00:23:38.256 "ana_reporting": false 00:23:38.256 } 00:23:38.256 }, 00:23:38.256 { 00:23:38.256 "method": "nvmf_subsystem_add_host", 00:23:38.256 "params": { 00:23:38.256 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.256 "host": "nqn.2016-06.io.spdk:host1", 00:23:38.256 "psk": "key0" 00:23:38.256 } 00:23:38.256 }, 00:23:38.256 { 00:23:38.256 "method": "nvmf_subsystem_add_ns", 00:23:38.256 "params": { 00:23:38.256 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.256 "namespace": { 00:23:38.256 "nsid": 1, 00:23:38.256 "bdev_name": "malloc0", 00:23:38.256 "nguid": "20AB89C830F04C0184C0463FFB552448", 00:23:38.256 "uuid": "20ab89c8-30f0-4c01-84c0-463ffb552448", 00:23:38.256 "no_auto_visible": false 00:23:38.256 } 00:23:38.256 } 00:23:38.256 }, 00:23:38.256 { 00:23:38.256 "method": "nvmf_subsystem_add_listener", 00:23:38.256 "params": { 00:23:38.256 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.256 "listen_address": { 00:23:38.256 "trtype": "TCP", 00:23:38.256 "adrfam": "IPv4", 00:23:38.256 "traddr": "10.0.0.2", 00:23:38.256 "trsvcid": "4420" 00:23:38.256 }, 00:23:38.256 "secure_channel": true 00:23:38.256 } 00:23:38.256 } 00:23:38.256 ] 00:23:38.256 } 00:23:38.256 ] 00:23:38.256 }' 00:23:38.256 04:59:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:38.825 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:38.825 "subsystems": [ 00:23:38.825 { 00:23:38.825 "subsystem": "keyring", 00:23:38.825 "config": [ 00:23:38.825 { 00:23:38.825 "method": "keyring_file_add_key", 00:23:38.825 "params": { 00:23:38.825 "name": "key0", 00:23:38.825 "path": "/tmp/tmp.BVlPff9GKA" 00:23:38.825 } 00:23:38.825 } 00:23:38.825 ] 00:23:38.825 }, 00:23:38.825 { 00:23:38.825 "subsystem": "iobuf", 00:23:38.825 "config": [ 00:23:38.825 { 00:23:38.825 "method": "iobuf_set_options", 00:23:38.825 "params": { 00:23:38.825 "small_pool_count": 8192, 00:23:38.825 "large_pool_count": 1024, 00:23:38.825 "small_bufsize": 8192, 00:23:38.825 "large_bufsize": 135168, 00:23:38.825 "enable_numa": false 00:23:38.825 } 00:23:38.825 } 00:23:38.825 ] 00:23:38.825 }, 00:23:38.825 { 00:23:38.825 "subsystem": "sock", 00:23:38.825 "config": [ 00:23:38.825 { 00:23:38.825 "method": "sock_set_default_impl", 00:23:38.825 "params": { 00:23:38.825 "impl_name": "posix" 00:23:38.825 } 00:23:38.825 }, 00:23:38.825 { 00:23:38.825 "method": "sock_impl_set_options", 00:23:38.825 "params": { 00:23:38.825 "impl_name": "ssl", 00:23:38.825 "recv_buf_size": 4096, 00:23:38.825 "send_buf_size": 4096, 00:23:38.825 "enable_recv_pipe": true, 00:23:38.825 "enable_quickack": false, 00:23:38.825 "enable_placement_id": 0, 00:23:38.825 "enable_zerocopy_send_server": true, 00:23:38.825 "enable_zerocopy_send_client": false, 00:23:38.825 "zerocopy_threshold": 0, 00:23:38.825 "tls_version": 0, 00:23:38.825 "enable_ktls": false 00:23:38.825 } 00:23:38.825 }, 00:23:38.825 { 00:23:38.825 "method": "sock_impl_set_options", 00:23:38.825 "params": { 00:23:38.825 "impl_name": "posix", 00:23:38.825 "recv_buf_size": 2097152, 00:23:38.825 "send_buf_size": 2097152, 00:23:38.825 "enable_recv_pipe": true, 00:23:38.825 "enable_quickack": false, 00:23:38.825 "enable_placement_id": 0, 00:23:38.825 "enable_zerocopy_send_server": true, 
00:23:38.825 "enable_zerocopy_send_client": false, 00:23:38.825 "zerocopy_threshold": 0, 00:23:38.825 "tls_version": 0, 00:23:38.825 "enable_ktls": false 00:23:38.825 } 00:23:38.825 } 00:23:38.825 ] 00:23:38.825 }, 00:23:38.825 { 00:23:38.825 "subsystem": "vmd", 00:23:38.825 "config": [] 00:23:38.825 }, 00:23:38.825 { 00:23:38.825 "subsystem": "accel", 00:23:38.826 "config": [ 00:23:38.826 { 00:23:38.826 "method": "accel_set_options", 00:23:38.826 "params": { 00:23:38.826 "small_cache_size": 128, 00:23:38.826 "large_cache_size": 16, 00:23:38.826 "task_count": 2048, 00:23:38.826 "sequence_count": 2048, 00:23:38.826 "buf_count": 2048 00:23:38.826 } 00:23:38.826 } 00:23:38.826 ] 00:23:38.826 }, 00:23:38.826 { 00:23:38.826 "subsystem": "bdev", 00:23:38.826 "config": [ 00:23:38.826 { 00:23:38.826 "method": "bdev_set_options", 00:23:38.826 "params": { 00:23:38.826 "bdev_io_pool_size": 65535, 00:23:38.826 "bdev_io_cache_size": 256, 00:23:38.826 "bdev_auto_examine": true, 00:23:38.826 "iobuf_small_cache_size": 128, 00:23:38.826 "iobuf_large_cache_size": 16 00:23:38.826 } 00:23:38.826 }, 00:23:38.826 { 00:23:38.826 "method": "bdev_raid_set_options", 00:23:38.826 "params": { 00:23:38.826 "process_window_size_kb": 1024, 00:23:38.826 "process_max_bandwidth_mb_sec": 0 00:23:38.826 } 00:23:38.826 }, 00:23:38.826 { 00:23:38.826 "method": "bdev_iscsi_set_options", 00:23:38.826 "params": { 00:23:38.826 "timeout_sec": 30 00:23:38.826 } 00:23:38.826 }, 00:23:38.826 { 00:23:38.826 "method": "bdev_nvme_set_options", 00:23:38.826 "params": { 00:23:38.826 "action_on_timeout": "none", 00:23:38.826 "timeout_us": 0, 00:23:38.826 "timeout_admin_us": 0, 00:23:38.826 "keep_alive_timeout_ms": 10000, 00:23:38.826 "arbitration_burst": 0, 00:23:38.826 "low_priority_weight": 0, 00:23:38.826 "medium_priority_weight": 0, 00:23:38.826 "high_priority_weight": 0, 00:23:38.826 "nvme_adminq_poll_period_us": 10000, 00:23:38.826 "nvme_ioq_poll_period_us": 0, 00:23:38.826 "io_queue_requests": 512, 00:23:38.826 "delay_cmd_submit": true, 00:23:38.826 "transport_retry_count": 4, 00:23:38.826 "bdev_retry_count": 3, 00:23:38.826 "transport_ack_timeout": 0, 00:23:38.826 "ctrlr_loss_timeout_sec": 0, 00:23:38.826 "reconnect_delay_sec": 0, 00:23:38.826 "fast_io_fail_timeout_sec": 0, 00:23:38.826 "disable_auto_failback": false, 00:23:38.826 "generate_uuids": false, 00:23:38.826 "transport_tos": 0, 00:23:38.826 "nvme_error_stat": false, 00:23:38.826 "rdma_srq_size": 0, 00:23:38.826 "io_path_stat": false, 00:23:38.826 "allow_accel_sequence": false, 00:23:38.826 "rdma_max_cq_size": 0, 00:23:38.826 "rdma_cm_event_timeout_ms": 0, 00:23:38.826 "dhchap_digests": [ 00:23:38.826 "sha256", 00:23:38.826 "sha384", 00:23:38.826 "sha512" 00:23:38.826 ], 00:23:38.826 "dhchap_dhgroups": [ 00:23:38.826 "null", 00:23:38.826 "ffdhe2048", 00:23:38.826 "ffdhe3072", 00:23:38.826 "ffdhe4096", 00:23:38.826 "ffdhe6144", 00:23:38.826 "ffdhe8192" 00:23:38.826 ] 00:23:38.826 } 00:23:38.826 }, 00:23:38.826 { 00:23:38.826 "method": "bdev_nvme_attach_controller", 00:23:38.826 "params": { 00:23:38.826 "name": "TLSTEST", 00:23:38.826 "trtype": "TCP", 00:23:38.826 "adrfam": "IPv4", 00:23:38.826 "traddr": "10.0.0.2", 00:23:38.826 "trsvcid": "4420", 00:23:38.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.826 "prchk_reftag": false, 00:23:38.826 "prchk_guard": false, 00:23:38.826 "ctrlr_loss_timeout_sec": 0, 00:23:38.826 "reconnect_delay_sec": 0, 00:23:38.826 "fast_io_fail_timeout_sec": 0, 00:23:38.826 "psk": "key0", 00:23:38.826 "hostnqn": "nqn.2016-06.io.spdk:host1", 
00:23:38.826 "hdgst": false, 00:23:38.826 "ddgst": false, 00:23:38.826 "multipath": "multipath" 00:23:38.826 } 00:23:38.826 }, 00:23:38.826 { 00:23:38.826 "method": "bdev_nvme_set_hotplug", 00:23:38.826 "params": { 00:23:38.826 "period_us": 100000, 00:23:38.826 "enable": false 00:23:38.826 } 00:23:38.826 }, 00:23:38.826 { 00:23:38.826 "method": "bdev_wait_for_examine" 00:23:38.826 } 00:23:38.826 ] 00:23:38.826 }, 00:23:38.826 { 00:23:38.826 "subsystem": "nbd", 00:23:38.826 "config": [] 00:23:38.826 } 00:23:38.826 ] 00:23:38.826 }' 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2356019 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2356019 ']' 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2356019 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2356019 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2356019' 00:23:38.826 killing process with pid 2356019 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2356019 00:23:38.826 Received shutdown signal, test time was about 10.000000 seconds 00:23:38.826 00:23:38.826 Latency(us) 00:23:38.826 [2024-10-28T03:59:29.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.826 [2024-10-28T03:59:29.422Z] =================================================================================================================== 00:23:38.826 [2024-10-28T03:59:29.422Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2356019 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2355646 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2355646 ']' 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2355646 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2355646 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:38.826 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:38.827 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2355646' 00:23:38.827 killing process with pid 2355646 00:23:38.827 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2355646 
00:23:38.827 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2355646 00:23:39.087 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:39.087 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:39.087 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.087 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:39.087 "subsystems": [ 00:23:39.087 { 00:23:39.087 "subsystem": "keyring", 00:23:39.087 "config": [ 00:23:39.087 { 00:23:39.087 "method": "keyring_file_add_key", 00:23:39.087 "params": { 00:23:39.087 "name": "key0", 00:23:39.087 "path": "/tmp/tmp.BVlPff9GKA" 00:23:39.087 } 00:23:39.087 } 00:23:39.087 ] 00:23:39.087 }, 00:23:39.087 { 00:23:39.087 "subsystem": "iobuf", 00:23:39.087 "config": [ 00:23:39.087 { 00:23:39.087 "method": "iobuf_set_options", 00:23:39.087 "params": { 00:23:39.087 "small_pool_count": 8192, 00:23:39.087 "large_pool_count": 1024, 00:23:39.087 "small_bufsize": 8192, 00:23:39.087 "large_bufsize": 135168, 00:23:39.087 "enable_numa": false 00:23:39.087 } 00:23:39.087 } 00:23:39.087 ] 00:23:39.087 }, 00:23:39.087 { 00:23:39.087 "subsystem": "sock", 00:23:39.087 "config": [ 00:23:39.087 { 00:23:39.087 "method": "sock_set_default_impl", 00:23:39.087 "params": { 00:23:39.087 "impl_name": "posix" 00:23:39.087 } 00:23:39.087 }, 00:23:39.087 { 00:23:39.087 "method": "sock_impl_set_options", 00:23:39.087 "params": { 00:23:39.087 "impl_name": "ssl", 00:23:39.087 "recv_buf_size": 4096, 00:23:39.087 "send_buf_size": 4096, 00:23:39.087 "enable_recv_pipe": true, 00:23:39.087 "enable_quickack": false, 00:23:39.087 "enable_placement_id": 0, 00:23:39.087 "enable_zerocopy_send_server": true, 00:23:39.087 "enable_zerocopy_send_client": false, 00:23:39.087 "zerocopy_threshold": 0, 00:23:39.087 "tls_version": 0, 00:23:39.087 "enable_ktls": false 00:23:39.087 } 00:23:39.087 }, 00:23:39.087 { 00:23:39.087 "method": "sock_impl_set_options", 00:23:39.087 "params": { 00:23:39.087 "impl_name": "posix", 00:23:39.087 "recv_buf_size": 2097152, 00:23:39.087 "send_buf_size": 2097152, 00:23:39.087 "enable_recv_pipe": true, 00:23:39.087 "enable_quickack": false, 00:23:39.087 "enable_placement_id": 0, 00:23:39.087 "enable_zerocopy_send_server": true, 00:23:39.087 "enable_zerocopy_send_client": false, 00:23:39.087 "zerocopy_threshold": 0, 00:23:39.087 "tls_version": 0, 00:23:39.087 "enable_ktls": false 00:23:39.087 } 00:23:39.087 } 00:23:39.087 ] 00:23:39.087 }, 00:23:39.087 { 00:23:39.087 "subsystem": "vmd", 00:23:39.087 "config": [] 00:23:39.087 }, 00:23:39.087 { 00:23:39.087 "subsystem": "accel", 00:23:39.087 "config": [ 00:23:39.087 { 00:23:39.087 "method": "accel_set_options", 00:23:39.087 "params": { 00:23:39.087 "small_cache_size": 128, 00:23:39.087 "large_cache_size": 16, 00:23:39.087 "task_count": 2048, 00:23:39.087 "sequence_count": 2048, 00:23:39.087 "buf_count": 2048 00:23:39.087 } 00:23:39.087 } 00:23:39.087 ] 00:23:39.087 }, 00:23:39.087 { 00:23:39.087 "subsystem": "bdev", 00:23:39.087 "config": [ 00:23:39.087 { 00:23:39.087 "method": "bdev_set_options", 00:23:39.087 "params": { 00:23:39.087 "bdev_io_pool_size": 65535, 00:23:39.087 "bdev_io_cache_size": 256, 00:23:39.087 "bdev_auto_examine": true, 00:23:39.087 "iobuf_small_cache_size": 128, 00:23:39.087 "iobuf_large_cache_size": 16 00:23:39.087 } 00:23:39.087 }, 00:23:39.087 { 00:23:39.087 "method": 
"bdev_raid_set_options", 00:23:39.087 "params": { 00:23:39.087 "process_window_size_kb": 1024, 00:23:39.087 "process_max_bandwidth_mb_sec": 0 00:23:39.087 } 00:23:39.087 }, 00:23:39.087 { 00:23:39.087 "method": "bdev_iscsi_set_options", 00:23:39.087 "params": { 00:23:39.087 "timeout_sec": 30 00:23:39.087 } 00:23:39.087 }, 00:23:39.087 { 00:23:39.087 "method": "bdev_nvme_set_options", 00:23:39.087 "params": { 00:23:39.087 "action_on_timeout": "none", 00:23:39.087 "timeout_us": 0, 00:23:39.087 "timeout_admin_us": 0, 00:23:39.087 "keep_alive_timeout_ms": 10000, 00:23:39.087 "arbitration_burst": 0, 00:23:39.087 "low_priority_weight": 0, 00:23:39.087 "medium_priority_weight": 0, 00:23:39.087 "high_priority_weight": 0, 00:23:39.087 "nvme_adminq_poll_period_us": 10000, 00:23:39.087 "nvme_ioq_poll_period_us": 0, 00:23:39.087 "io_queue_requests": 0, 00:23:39.087 "delay_cmd_submit": true, 00:23:39.087 "transport_retry_count": 4, 00:23:39.088 "bdev_retry_count": 3, 00:23:39.088 "transport_ack_timeout": 0, 00:23:39.088 "ctrlr_loss_timeout_sec": 0, 00:23:39.088 "reconnect_delay_sec": 0, 00:23:39.088 "fast_io_fail_timeout_sec": 0, 00:23:39.088 "disable_auto_failback": false, 00:23:39.088 "generate_uuids": false, 00:23:39.088 "transport_tos": 0, 00:23:39.088 "nvme_error_stat": false, 00:23:39.088 "rdma_srq_size": 0, 00:23:39.088 "io_path_stat": false, 00:23:39.088 "allow_accel_sequence": false, 00:23:39.088 "rdma_max_cq_size": 0, 00:23:39.088 "rdma_cm_event_timeout_ms": 0, 00:23:39.088 "dhchap_digests": [ 00:23:39.088 "sha256", 00:23:39.088 "sha384", 00:23:39.088 "sha512" 00:23:39.088 ], 00:23:39.088 "dhchap_dhgroups": [ 00:23:39.088 "null", 00:23:39.088 "ffdhe2048", 00:23:39.088 "ffdhe3072", 00:23:39.088 "ffdhe4096", 00:23:39.088 "ffdhe6144", 00:23:39.088 "ffdhe8192" 00:23:39.088 ] 00:23:39.088 } 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "method": "bdev_nvme_set_hotplug", 00:23:39.088 "params": { 00:23:39.088 "period_us": 100000, 00:23:39.088 "enable": false 00:23:39.088 } 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "method": "bdev_malloc_create", 00:23:39.088 "params": { 00:23:39.088 "name": "malloc0", 00:23:39.088 "num_blocks": 8192, 00:23:39.088 "block_size": 4096, 00:23:39.088 "physical_block_size": 4096, 00:23:39.088 "uuid": "20ab89c8-30f0-4c01-84c0-463ffb552448", 00:23:39.088 "optimal_io_boundary": 0, 00:23:39.088 "md_size": 0, 00:23:39.088 "dif_type": 0, 00:23:39.088 "dif_is_head_of_md": false, 00:23:39.088 "dif_pi_format": 0 00:23:39.088 } 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "method": "bdev_wait_for_examine" 00:23:39.088 } 00:23:39.088 ] 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "subsystem": "nbd", 00:23:39.088 "config": [] 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "subsystem": "scheduler", 00:23:39.088 "config": [ 00:23:39.088 { 00:23:39.088 "method": "framework_set_scheduler", 00:23:39.088 "params": { 00:23:39.088 "name": "static" 00:23:39.088 } 00:23:39.088 } 00:23:39.088 ] 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "subsystem": "nvmf", 00:23:39.088 "config": [ 00:23:39.088 { 00:23:39.088 "method": "nvmf_set_config", 00:23:39.088 "params": { 00:23:39.088 "discovery_filter": "match_any", 00:23:39.088 "admin_cmd_passthru": { 00:23:39.088 "identify_ctrlr": false 00:23:39.088 }, 00:23:39.088 "dhchap_digests": [ 00:23:39.088 "sha256", 00:23:39.088 "sha384", 00:23:39.088 "sha512" 00:23:39.088 ], 00:23:39.088 "dhchap_dhgroups": [ 00:23:39.088 "null", 00:23:39.088 "ffdhe2048", 00:23:39.088 "ffdhe3072", 00:23:39.088 "ffdhe4096", 00:23:39.088 "ffdhe6144", 00:23:39.088 "ffdhe8192" 
00:23:39.088 ] 00:23:39.088 } 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "method": "nvmf_set_max_subsystems", 00:23:39.088 "params": { 00:23:39.088 "max_subsystems": 1024 00:23:39.088 } 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "method": "nvmf_set_crdt", 00:23:39.088 "params": { 00:23:39.088 "crdt1": 0, 00:23:39.088 "crdt2": 0, 00:23:39.088 "crdt3": 0 00:23:39.088 } 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "method": "nvmf_create_transport", 00:23:39.088 "params": { 00:23:39.088 "trtype": "TCP", 00:23:39.088 "max_queue_depth": 128, 00:23:39.088 "max_io_qpairs_per_ctrlr": 127, 00:23:39.088 "in_capsule_data_size": 4096, 00:23:39.088 "max_io_size": 131072, 00:23:39.088 "io_unit_size": 131072, 00:23:39.088 "max_aq_depth": 128, 00:23:39.088 "num_shared_buffers": 511, 00:23:39.088 "buf_cache_size": 4294967295, 00:23:39.088 "dif_insert_or_strip": false, 00:23:39.088 "zcopy": false, 00:23:39.088 "c2h_success": false, 00:23:39.088 "sock_priority": 0, 00:23:39.088 "abort_timeout_sec": 1, 00:23:39.088 "ack_timeout": 0, 00:23:39.088 "data_wr_pool_size": 0 00:23:39.088 } 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "method": "nvmf_create_subsystem", 00:23:39.088 "params": { 00:23:39.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.088 "allow_any_host": false, 00:23:39.088 "serial_number": "SPDK00000000000001", 00:23:39.088 "model_number": "SPDK bdev Controller", 00:23:39.088 "max_namespaces": 10, 00:23:39.088 "min_cntlid": 1, 00:23:39.088 "max_cntlid": 65519, 00:23:39.088 "ana_reporting": false 00:23:39.088 } 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "method": "nvmf_subsystem_add_host", 00:23:39.088 "params": { 00:23:39.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.088 "host": "nqn.2016-06.io.spdk:host1", 00:23:39.088 "psk": "key0" 00:23:39.088 } 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "method": "nvmf_subsystem_add_ns", 00:23:39.088 "params": { 00:23:39.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.088 "namespace": { 00:23:39.088 "nsid": 1, 00:23:39.088 "bdev_name": "malloc0", 00:23:39.088 "nguid": "20AB89C830F04C0184C0463FFB552448", 00:23:39.088 "uuid": "20ab89c8-30f0-4c01-84c0-463ffb552448", 00:23:39.088 "no_auto_visible": false 00:23:39.088 } 00:23:39.088 } 00:23:39.088 }, 00:23:39.088 { 00:23:39.088 "method": "nvmf_subsystem_add_listener", 00:23:39.088 "params": { 00:23:39.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.088 "listen_address": { 00:23:39.088 "trtype": "TCP", 00:23:39.088 "adrfam": "IPv4", 00:23:39.088 "traddr": "10.0.0.2", 00:23:39.088 "trsvcid": "4420" 00:23:39.088 }, 00:23:39.088 "secure_channel": true 00:23:39.088 } 00:23:39.088 } 00:23:39.088 ] 00:23:39.088 } 00:23:39.088 ] 00:23:39.088 }' 00:23:39.088 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.088 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2356341 00:23:39.088 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:39.088 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2356341 00:23:39.088 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2356341 ']' 00:23:39.088 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.089 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:23:39.089 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.089 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.089 04:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.089 [2024-10-28 04:59:29.668428] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:39.089 [2024-10-28 04:59:29.668503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.347 [2024-10-28 04:59:29.806803] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:39.347 [2024-10-28 04:59:29.843111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.347 [2024-10-28 04:59:29.891014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.347 [2024-10-28 04:59:29.891090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.347 [2024-10-28 04:59:29.891107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.347 [2024-10-28 04:59:29.891121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.347 [2024-10-28 04:59:29.891133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
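This target instance (pid 2356341) is not provisioned over RPC at all: the JSON captured earlier is fed to it on fd 62 (-c /dev/fd/62), so the keyring entry, subsystem, namespace, host/PSK binding and TLS listener are all created while the app starts, which is why the listener notice appears immediately below with no further rpc.py calls. A sketch of one way to round-trip a configuration like this without a temporary file; the exact redirection used by tls.sh may differ, but process substitution surfaces as a /dev/fd path on Linux:

    # capture the running target's state, then boot a fresh target from it
    tgtconf=$(scripts/rpc.py save_config)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")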
00:23:39.347 [2024-10-28 04:59:29.891846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.606 [2024-10-28 04:59:30.143616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.606 [2024-10-28 04:59:30.175553] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:39.606 [2024-10-28 04:59:30.175844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2356490 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2356490 /var/tmp/bdevperf.sock 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2356490 ']' 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:40.174 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:40.174 "subsystems": [ 00:23:40.174 { 00:23:40.174 "subsystem": "keyring", 00:23:40.174 "config": [ 00:23:40.174 { 00:23:40.174 "method": "keyring_file_add_key", 00:23:40.174 "params": { 00:23:40.174 "name": "key0", 00:23:40.174 "path": "/tmp/tmp.BVlPff9GKA" 00:23:40.174 } 00:23:40.174 } 00:23:40.174 ] 00:23:40.174 }, 00:23:40.174 { 00:23:40.174 "subsystem": "iobuf", 00:23:40.174 "config": [ 00:23:40.174 { 00:23:40.174 "method": "iobuf_set_options", 00:23:40.174 "params": { 00:23:40.174 "small_pool_count": 8192, 00:23:40.174 "large_pool_count": 1024, 00:23:40.174 "small_bufsize": 8192, 00:23:40.174 "large_bufsize": 135168, 00:23:40.174 "enable_numa": false 00:23:40.174 } 00:23:40.174 } 00:23:40.174 ] 00:23:40.174 }, 00:23:40.174 { 00:23:40.174 "subsystem": "sock", 00:23:40.174 "config": [ 00:23:40.174 { 00:23:40.174 "method": "sock_set_default_impl", 00:23:40.174 "params": { 00:23:40.174 "impl_name": "posix" 00:23:40.174 } 00:23:40.174 }, 00:23:40.174 { 00:23:40.174 "method": "sock_impl_set_options", 00:23:40.174 "params": { 00:23:40.174 "impl_name": "ssl", 00:23:40.174 "recv_buf_size": 4096, 00:23:40.174 "send_buf_size": 4096, 00:23:40.174 "enable_recv_pipe": true, 00:23:40.174 "enable_quickack": false, 00:23:40.174 "enable_placement_id": 0, 00:23:40.174 "enable_zerocopy_send_server": true, 00:23:40.174 "enable_zerocopy_send_client": false, 00:23:40.174 "zerocopy_threshold": 0, 00:23:40.174 "tls_version": 0, 00:23:40.174 "enable_ktls": false 00:23:40.174 } 00:23:40.174 }, 00:23:40.174 { 00:23:40.174 "method": "sock_impl_set_options", 00:23:40.174 "params": { 00:23:40.175 "impl_name": "posix", 00:23:40.175 "recv_buf_size": 2097152, 00:23:40.175 "send_buf_size": 2097152, 00:23:40.175 "enable_recv_pipe": true, 00:23:40.175 "enable_quickack": false, 00:23:40.175 "enable_placement_id": 0, 00:23:40.175 "enable_zerocopy_send_server": true, 00:23:40.175 "enable_zerocopy_send_client": false, 00:23:40.175 "zerocopy_threshold": 0, 00:23:40.175 "tls_version": 0, 00:23:40.175 "enable_ktls": false 00:23:40.175 } 00:23:40.175 } 00:23:40.175 ] 00:23:40.175 }, 00:23:40.175 { 00:23:40.175 "subsystem": "vmd", 00:23:40.175 "config": [] 00:23:40.175 }, 00:23:40.175 { 00:23:40.175 "subsystem": "accel", 00:23:40.175 "config": [ 00:23:40.175 { 00:23:40.175 "method": "accel_set_options", 00:23:40.175 "params": { 00:23:40.175 "small_cache_size": 128, 00:23:40.175 "large_cache_size": 16, 00:23:40.175 "task_count": 2048, 00:23:40.175 "sequence_count": 2048, 00:23:40.175 "buf_count": 2048 00:23:40.175 } 00:23:40.175 } 00:23:40.175 ] 00:23:40.175 }, 00:23:40.175 { 00:23:40.175 "subsystem": "bdev", 00:23:40.175 "config": [ 00:23:40.175 { 00:23:40.175 "method": "bdev_set_options", 00:23:40.175 "params": { 00:23:40.175 "bdev_io_pool_size": 65535, 00:23:40.175 "bdev_io_cache_size": 256, 00:23:40.175 "bdev_auto_examine": true, 00:23:40.175 "iobuf_small_cache_size": 128, 00:23:40.175 "iobuf_large_cache_size": 16 00:23:40.175 } 00:23:40.175 }, 00:23:40.175 { 00:23:40.175 "method": "bdev_raid_set_options", 00:23:40.175 "params": { 00:23:40.175 "process_window_size_kb": 1024, 00:23:40.175 "process_max_bandwidth_mb_sec": 0 00:23:40.175 } 00:23:40.175 }, 00:23:40.175 { 00:23:40.175 "method": "bdev_iscsi_set_options", 00:23:40.175 "params": { 00:23:40.175 "timeout_sec": 30 00:23:40.175 } 00:23:40.175 }, 00:23:40.175 { 
00:23:40.175 "method": "bdev_nvme_set_options", 00:23:40.175 "params": { 00:23:40.175 "action_on_timeout": "none", 00:23:40.175 "timeout_us": 0, 00:23:40.175 "timeout_admin_us": 0, 00:23:40.175 "keep_alive_timeout_ms": 10000, 00:23:40.175 "arbitration_burst": 0, 00:23:40.175 "low_priority_weight": 0, 00:23:40.175 "medium_priority_weight": 0, 00:23:40.175 "high_priority_weight": 0, 00:23:40.175 "nvme_adminq_poll_period_us": 10000, 00:23:40.175 "nvme_ioq_poll_period_us": 0, 00:23:40.175 "io_queue_requests": 512, 00:23:40.175 "delay_cmd_submit": true, 00:23:40.175 "transport_retry_count": 4, 00:23:40.175 "bdev_retry_count": 3, 00:23:40.175 "transport_ack_timeout": 0, 00:23:40.175 "ctrlr_loss_timeout_sec": 0, 00:23:40.175 "reconnect_delay_sec": 0, 00:23:40.175 "fast_io_fail_timeout_sec": 0, 00:23:40.175 "disable_auto_failback": false, 00:23:40.175 "generate_uuids": false, 00:23:40.175 "transport_tos": 0, 00:23:40.175 "nvme_error_stat": false, 00:23:40.175 "rdma_srq_size": 0, 00:23:40.175 "io_path_stat": false, 00:23:40.175 "allow_accel_sequence": false, 00:23:40.175 "rdma_max_cq_size": 0, 00:23:40.175 "rdma_cm_event_timeout_ms": 0 04:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.175 , 00:23:40.175 "dhchap_digests": [ 00:23:40.175 "sha256", 00:23:40.175 "sha384", 00:23:40.175 "sha512" 00:23:40.175 ], 00:23:40.175 "dhchap_dhgroups": [ 00:23:40.175 "null", 00:23:40.175 "ffdhe2048", 00:23:40.175 "ffdhe3072", 00:23:40.175 "ffdhe4096", 00:23:40.175 "ffdhe6144", 00:23:40.175 "ffdhe8192" 00:23:40.175 ] 00:23:40.175 } 00:23:40.175 }, 00:23:40.175 { 00:23:40.175 "method": "bdev_nvme_attach_controller", 00:23:40.175 "params": { 00:23:40.175 "name": "TLSTEST", 00:23:40.175 "trtype": "TCP", 00:23:40.175 "adrfam": "IPv4", 00:23:40.175 "traddr": "10.0.0.2", 00:23:40.175 "trsvcid": "4420", 00:23:40.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.175 "prchk_reftag": false, 00:23:40.175 "prchk_guard": false, 00:23:40.175 "ctrlr_loss_timeout_sec": 0, 00:23:40.175 "reconnect_delay_sec": 0, 00:23:40.175 "fast_io_fail_timeout_sec": 0, 00:23:40.175 "psk": "key0", 00:23:40.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.175 "hdgst": false, 00:23:40.175 "ddgst": false, 00:23:40.175 "multipath": "multipath" 00:23:40.175 } 00:23:40.175 }, 00:23:40.175 { 00:23:40.175 "method": "bdev_nvme_set_hotplug", 00:23:40.175 "params": { 00:23:40.175 "period_us": 100000, 00:23:40.175 "enable": false 00:23:40.175 } 00:23:40.175 }, 00:23:40.175 { 00:23:40.175 "method": "bdev_wait_for_examine" 00:23:40.175 } 00:23:40.175 ] 00:23:40.175 }, 00:23:40.175 { 00:23:40.175 "subsystem": "nbd", 00:23:40.175 "config": [] 00:23:40.175 } 00:23:40.175 ] 00:23:40.175 }' 00:23:40.175 [2024-10-28 04:59:30.715245] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:40.175 [2024-10-28 04:59:30.715338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356490 ] 00:23:40.436 [2024-10-28 04:59:30.848382] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:40.436 [2024-10-28 04:59:30.884048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.436 [2024-10-28 04:59:30.929247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.698 [2024-10-28 04:59:31.106720] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.265 04:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.265 04:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:41.265 04:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:41.525 Running I/O for 10 seconds... 00:23:43.402 3463.00 IOPS, 13.53 MiB/s [2024-10-28T03:59:34.933Z] 3510.50 IOPS, 13.71 MiB/s [2024-10-28T03:59:36.309Z] 3562.00 IOPS, 13.91 MiB/s [2024-10-28T03:59:37.246Z] 3563.25 IOPS, 13.92 MiB/s [2024-10-28T03:59:38.178Z] 3557.60 IOPS, 13.90 MiB/s [2024-10-28T03:59:39.115Z] 3554.00 IOPS, 13.88 MiB/s [2024-10-28T03:59:40.054Z] 3541.00 IOPS, 13.83 MiB/s [2024-10-28T03:59:40.991Z] 3553.62 IOPS, 13.88 MiB/s [2024-10-28T03:59:41.927Z] 3554.78 IOPS, 13.89 MiB/s [2024-10-28T03:59:41.927Z] 3555.80 IOPS, 13.89 MiB/s 00:23:51.331 Latency(us) 00:23:51.331 [2024-10-28T03:59:41.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.331 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:51.331 Verification LBA range: start 0x0 length 0x2000 00:23:51.331 TLSTESTn1 : 10.03 3559.60 13.90 0.00 0.00 35890.29 8905.21 37956.62 00:23:51.331 [2024-10-28T03:59:41.927Z] =================================================================================================================== 00:23:51.331 [2024-10-28T03:59:41.927Z] Total : 3559.60 13.90 0.00 0.00 35890.29 8905.21 37956.62 00:23:51.331 { 00:23:51.331 "results": [ 00:23:51.331 { 00:23:51.331 "job": "TLSTESTn1", 00:23:51.331 "core_mask": "0x4", 00:23:51.331 "workload": "verify", 00:23:51.331 "status": "finished", 00:23:51.331 "verify_range": { 00:23:51.331 "start": 0, 00:23:51.331 "length": 8192 00:23:51.331 }, 00:23:51.331 "queue_depth": 128, 00:23:51.331 "io_size": 4096, 00:23:51.331 "runtime": 10.025009, 00:23:51.331 "iops": 3559.597801857335, 00:23:51.331 "mibps": 13.904678913505215, 00:23:51.331 "io_failed": 0, 00:23:51.331 "io_timeout": 0, 00:23:51.331 "avg_latency_us": 35890.287284244325, 00:23:51.331 "min_latency_us": 8905.207351030258, 00:23:51.331 "max_latency_us": 37956.621496194544 00:23:51.331 } 00:23:51.331 ], 00:23:51.331 "core_count": 1 00:23:51.331 } 00:23:51.331 04:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.331 04:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2356490 00:23:51.331 04:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2356490 ']' 00:23:51.331 04:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2356490 00:23:51.331 04:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:51.331 04:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.331 04:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2356490 00:23:51.591 04:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:51.591 04:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:51.591 04:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2356490' 00:23:51.591 killing process with pid 2356490 00:23:51.591 04:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2356490 00:23:51.591 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.591 00:23:51.591 Latency(us) 00:23:51.591 [2024-10-28T03:59:42.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.591 [2024-10-28T03:59:42.187Z] =================================================================================================================== 00:23:51.591 [2024-10-28T03:59:42.187Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.591 04:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2356490 00:23:51.591 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2356341 00:23:51.591 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2356341 ']' 00:23:51.592 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2356341 00:23:51.592 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:51.592 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.592 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2356341 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2356341' 00:23:51.852 killing process with pid 2356341 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2356341 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2356341 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2357787 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2357787 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2357787 ']' 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.852 04:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.113 [2024-10-28 04:59:42.461404] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:52.113 [2024-10-28 04:59:42.461501] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.113 [2024-10-28 04:59:42.600111] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:52.113 [2024-10-28 04:59:42.634745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.113 [2024-10-28 04:59:42.680939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.113 [2024-10-28 04:59:42.681011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.113 [2024-10-28 04:59:42.681028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.113 [2024-10-28 04:59:42.681042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.113 [2024-10-28 04:59:42.681054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
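The nvmfappstart/waitforlisten steps above boil down to launching nvmf_tgt inside the test network namespace and polling its RPC socket until it responds. A minimal stand-alone sketch of that pattern, assuming the same workspace path and namespace name as in this log; the retry loop and the rpc_get_methods probe are illustrative, not the exact helper the suite uses.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start the target in the test namespace: shared-memory id 0, all tracepoint groups enabled.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!
# Block until the RPC server answers on /var/tmp/spdk.sock before issuing any rpc.py calls.
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done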
00:23:52.113 [2024-10-28 04:59:42.681765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.050 04:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.051 04:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:53.051 04:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:53.051 04:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.051 04:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.051 04:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.051 04:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.BVlPff9GKA 00:23:53.051 04:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BVlPff9GKA 00:23:53.051 04:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:53.309 [2024-10-28 04:59:43.725475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.309 04:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:53.566 04:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:53.825 [2024-10-28 04:59:44.265623] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.825 [2024-10-28 04:59:44.265884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.825 04:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:54.082 malloc0 00:23:54.082 04:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:54.341 04:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA 00:23:54.598 04:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.857 04:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2358196 00:23:54.857 04:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:54.857 04:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.857 04:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2358196 /var/tmp/bdevperf.sock 00:23:54.857 04:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 2358196 ']' 00:23:54.857 04:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.857 04:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.857 04:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.857 04:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.857 04:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.116 [2024-10-28 04:59:45.453474] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:55.116 [2024-10-28 04:59:45.453555] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358196 ] 00:23:55.116 [2024-10-28 04:59:45.586054] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:55.116 [2024-10-28 04:59:45.621560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.116 [2024-10-28 04:59:45.669820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.051 04:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.051 04:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:56.051 04:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA 00:23:56.309 04:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:56.567 [2024-10-28 04:59:46.981137] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.567 nvme0n1 00:23:56.567 04:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:56.827 Running I/O for 1 seconds... 
00:23:57.765 3357.00 IOPS, 13.11 MiB/s 00:23:57.765 Latency(us) 00:23:57.765 [2024-10-28T03:59:48.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.765 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:57.765 Verification LBA range: start 0x0 length 0x2000 00:23:57.765 nvme0n1 : 1.02 3400.43 13.28 0.00 0.00 37208.14 6910.05 42238.91 00:23:57.765 [2024-10-28T03:59:48.361Z] =================================================================================================================== 00:23:57.765 [2024-10-28T03:59:48.361Z] Total : 3400.43 13.28 0.00 0.00 37208.14 6910.05 42238.91 00:23:57.765 { 00:23:57.765 "results": [ 00:23:57.765 { 00:23:57.765 "job": "nvme0n1", 00:23:57.765 "core_mask": "0x2", 00:23:57.765 "workload": "verify", 00:23:57.765 "status": "finished", 00:23:57.765 "verify_range": { 00:23:57.765 "start": 0, 00:23:57.765 "length": 8192 00:23:57.765 }, 00:23:57.765 "queue_depth": 128, 00:23:57.765 "io_size": 4096, 00:23:57.765 "runtime": 1.02487, 00:23:57.765 "iops": 3400.4312742103875, 00:23:57.765 "mibps": 13.282934664884326, 00:23:57.765 "io_failed": 0, 00:23:57.765 "io_timeout": 0, 00:23:57.765 "avg_latency_us": 37208.14311556675, 00:23:57.765 "min_latency_us": 6910.051605717468, 00:23:57.765 "max_latency_us": 42238.90699832931 00:23:57.765 } 00:23:57.765 ], 00:23:57.765 "core_count": 1 00:23:57.765 } 00:23:57.765 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2358196 00:23:57.765 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2358196 ']' 00:23:57.765 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2358196 00:23:57.765 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:57.765 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.766 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2358196 00:23:57.766 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:57.766 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:57.766 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2358196' 00:23:57.766 killing process with pid 2358196 00:23:57.766 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2358196 00:23:57.766 Received shutdown signal, test time was about 1.000000 seconds 00:23:57.766 00:23:57.766 Latency(us) 00:23:57.766 [2024-10-28T03:59:48.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.766 [2024-10-28T03:59:48.362Z] =================================================================================================================== 00:23:57.766 [2024-10-28T03:59:48.362Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.766 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2358196 00:23:58.024 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2357787 00:23:58.024 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2357787 ']' 00:23:58.024 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2357787 00:23:58.024 04:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:58.024 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:58.024 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2357787 00:23:58.024 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:58.024 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:58.024 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2357787' 00:23:58.024 killing process with pid 2357787 00:23:58.024 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2357787 00:23:58.024 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2357787 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2358594 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2358594 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2358594 ']' 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:58.283 04:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.283 [2024-10-28 04:59:48.737925] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:58.283 [2024-10-28 04:59:48.738021] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.283 [2024-10-28 04:59:48.874893] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:58.543 [2024-10-28 04:59:48.916523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.543 [2024-10-28 04:59:48.963724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:58.543 [2024-10-28 04:59:48.963795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.543 [2024-10-28 04:59:48.963820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.543 [2024-10-28 04:59:48.963841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.543 [2024-10-28 04:59:48.963854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.543 [2024-10-28 04:59:48.964506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.543 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:58.543 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:58.543 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:58.543 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:58.543 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.543 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.543 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:58.543 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.543 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.543 [2024-10-28 04:59:49.111166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.543 malloc0 00:23:58.801 [2024-10-28 04:59:49.143721] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.801 [2024-10-28 04:59:49.144000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.801 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.801 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2358621 00:23:58.801 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:58.801 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2358621 /var/tmp/bdevperf.sock 00:23:58.801 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2358621 ']' 00:23:58.801 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.801 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:58.801 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
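On the target side, the setup_nvmf_tgt sequence seen earlier in the trace (and repeated here via rpc_cmd) is a short rpc.py sequence: create the TCP transport, create the subsystem with a malloc namespace, open a TLS-capable listener (-k), then register the PSK interchange file as a keyring entry and bind it to the allowed host NQN. A condensed sketch of those calls, reusing the key path and NQNs that appear in the log; the $RPC shorthand is illustrative.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o                 # TCP transport, flags as invoked by the test
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-capable listener
$RPC bdev_malloc_create 32 4096 -b malloc0           # malloc bdev backing the namespace
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA   # PSK interchange file -> keyring entry "key0"
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0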
00:23:58.801 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:58.801 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.801 [2024-10-28 04:59:49.216795] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:23:58.801 [2024-10-28 04:59:49.216857] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358621 ] 00:23:58.802 [2024-10-28 04:59:49.348469] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:58.802 [2024-10-28 04:59:49.388216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.060 [2024-10-28 04:59:49.439645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.060 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:59.060 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:59.060 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BVlPff9GKA 00:23:59.317 04:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:59.574 [2024-10-28 04:59:50.166660] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:59.834 nvme0n1 00:23:59.834 04:59:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:59.834 Running I/O for 1 seconds... 
00:24:01.215 3394.00 IOPS, 13.26 MiB/s 00:24:01.215 Latency(us) 00:24:01.215 [2024-10-28T03:59:51.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.215 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:01.215 Verification LBA range: start 0x0 length 0x2000 00:24:01.215 nvme0n1 : 1.02 3443.33 13.45 0.00 0.00 36787.09 10024.44 38929.87 00:24:01.215 [2024-10-28T03:59:51.811Z] =================================================================================================================== 00:24:01.215 [2024-10-28T03:59:51.811Z] Total : 3443.33 13.45 0.00 0.00 36787.09 10024.44 38929.87 00:24:01.215 { 00:24:01.215 "results": [ 00:24:01.215 { 00:24:01.215 "job": "nvme0n1", 00:24:01.215 "core_mask": "0x2", 00:24:01.215 "workload": "verify", 00:24:01.215 "status": "finished", 00:24:01.215 "verify_range": { 00:24:01.215 "start": 0, 00:24:01.215 "length": 8192 00:24:01.215 }, 00:24:01.215 "queue_depth": 128, 00:24:01.215 "io_size": 4096, 00:24:01.215 "runtime": 1.022847, 00:24:01.215 "iops": 3443.33023414059, 00:24:01.215 "mibps": 13.45050872711168, 00:24:01.215 "io_failed": 0, 00:24:01.215 "io_timeout": 0, 00:24:01.215 "avg_latency_us": 36787.08817397172, 00:24:01.215 "min_latency_us": 10024.441061815482, 00:24:01.215 "max_latency_us": 38929.868201225174 00:24:01.215 } 00:24:01.215 ], 00:24:01.215 "core_count": 1 00:24:01.215 } 00:24:01.215 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:01.215 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.215 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.215 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.215 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:01.215 "subsystems": [ 00:24:01.215 { 00:24:01.215 "subsystem": "keyring", 00:24:01.215 "config": [ 00:24:01.215 { 00:24:01.215 "method": "keyring_file_add_key", 00:24:01.215 "params": { 00:24:01.215 "name": "key0", 00:24:01.215 "path": "/tmp/tmp.BVlPff9GKA" 00:24:01.215 } 00:24:01.215 } 00:24:01.215 ] 00:24:01.215 }, 00:24:01.215 { 00:24:01.215 "subsystem": "iobuf", 00:24:01.215 "config": [ 00:24:01.215 { 00:24:01.215 "method": "iobuf_set_options", 00:24:01.215 "params": { 00:24:01.215 "small_pool_count": 8192, 00:24:01.215 "large_pool_count": 1024, 00:24:01.215 "small_bufsize": 8192, 00:24:01.215 "large_bufsize": 135168, 00:24:01.215 "enable_numa": false 00:24:01.215 } 00:24:01.215 } 00:24:01.215 ] 00:24:01.215 }, 00:24:01.215 { 00:24:01.215 "subsystem": "sock", 00:24:01.215 "config": [ 00:24:01.215 { 00:24:01.215 "method": "sock_set_default_impl", 00:24:01.215 "params": { 00:24:01.215 "impl_name": "posix" 00:24:01.215 } 00:24:01.215 }, 00:24:01.215 { 00:24:01.215 "method": "sock_impl_set_options", 00:24:01.215 "params": { 00:24:01.215 "impl_name": "ssl", 00:24:01.215 "recv_buf_size": 4096, 00:24:01.215 "send_buf_size": 4096, 00:24:01.215 "enable_recv_pipe": true, 00:24:01.215 "enable_quickack": false, 00:24:01.215 "enable_placement_id": 0, 00:24:01.215 "enable_zerocopy_send_server": true, 00:24:01.215 "enable_zerocopy_send_client": false, 00:24:01.215 "zerocopy_threshold": 0, 00:24:01.215 "tls_version": 0, 00:24:01.215 "enable_ktls": false 00:24:01.215 } 00:24:01.215 }, 00:24:01.215 { 00:24:01.215 "method": "sock_impl_set_options", 00:24:01.215 "params": { 00:24:01.215 "impl_name": 
"posix", 00:24:01.216 "recv_buf_size": 2097152, 00:24:01.216 "send_buf_size": 2097152, 00:24:01.216 "enable_recv_pipe": true, 00:24:01.216 "enable_quickack": false, 00:24:01.216 "enable_placement_id": 0, 00:24:01.216 "enable_zerocopy_send_server": true, 00:24:01.216 "enable_zerocopy_send_client": false, 00:24:01.216 "zerocopy_threshold": 0, 00:24:01.216 "tls_version": 0, 00:24:01.216 "enable_ktls": false 00:24:01.216 } 00:24:01.216 } 00:24:01.216 ] 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "subsystem": "vmd", 00:24:01.216 "config": [] 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "subsystem": "accel", 00:24:01.216 "config": [ 00:24:01.216 { 00:24:01.216 "method": "accel_set_options", 00:24:01.216 "params": { 00:24:01.216 "small_cache_size": 128, 00:24:01.216 "large_cache_size": 16, 00:24:01.216 "task_count": 2048, 00:24:01.216 "sequence_count": 2048, 00:24:01.216 "buf_count": 2048 00:24:01.216 } 00:24:01.216 } 00:24:01.216 ] 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "subsystem": "bdev", 00:24:01.216 "config": [ 00:24:01.216 { 00:24:01.216 "method": "bdev_set_options", 00:24:01.216 "params": { 00:24:01.216 "bdev_io_pool_size": 65535, 00:24:01.216 "bdev_io_cache_size": 256, 00:24:01.216 "bdev_auto_examine": true, 00:24:01.216 "iobuf_small_cache_size": 128, 00:24:01.216 "iobuf_large_cache_size": 16 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "bdev_raid_set_options", 00:24:01.216 "params": { 00:24:01.216 "process_window_size_kb": 1024, 00:24:01.216 "process_max_bandwidth_mb_sec": 0 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "bdev_iscsi_set_options", 00:24:01.216 "params": { 00:24:01.216 "timeout_sec": 30 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "bdev_nvme_set_options", 00:24:01.216 "params": { 00:24:01.216 "action_on_timeout": "none", 00:24:01.216 "timeout_us": 0, 00:24:01.216 "timeout_admin_us": 0, 00:24:01.216 "keep_alive_timeout_ms": 10000, 00:24:01.216 "arbitration_burst": 0, 00:24:01.216 "low_priority_weight": 0, 00:24:01.216 "medium_priority_weight": 0, 00:24:01.216 "high_priority_weight": 0, 00:24:01.216 "nvme_adminq_poll_period_us": 10000, 00:24:01.216 "nvme_ioq_poll_period_us": 0, 00:24:01.216 "io_queue_requests": 0, 00:24:01.216 "delay_cmd_submit": true, 00:24:01.216 "transport_retry_count": 4, 00:24:01.216 "bdev_retry_count": 3, 00:24:01.216 "transport_ack_timeout": 0, 00:24:01.216 "ctrlr_loss_timeout_sec": 0, 00:24:01.216 "reconnect_delay_sec": 0, 00:24:01.216 "fast_io_fail_timeout_sec": 0, 00:24:01.216 "disable_auto_failback": false, 00:24:01.216 "generate_uuids": false, 00:24:01.216 "transport_tos": 0, 00:24:01.216 "nvme_error_stat": false, 00:24:01.216 "rdma_srq_size": 0, 00:24:01.216 "io_path_stat": false, 00:24:01.216 "allow_accel_sequence": false, 00:24:01.216 "rdma_max_cq_size": 0, 00:24:01.216 "rdma_cm_event_timeout_ms": 0, 00:24:01.216 "dhchap_digests": [ 00:24:01.216 "sha256", 00:24:01.216 "sha384", 00:24:01.216 "sha512" 00:24:01.216 ], 00:24:01.216 "dhchap_dhgroups": [ 00:24:01.216 "null", 00:24:01.216 "ffdhe2048", 00:24:01.216 "ffdhe3072", 00:24:01.216 "ffdhe4096", 00:24:01.216 "ffdhe6144", 00:24:01.216 "ffdhe8192" 00:24:01.216 ] 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "bdev_nvme_set_hotplug", 00:24:01.216 "params": { 00:24:01.216 "period_us": 100000, 00:24:01.216 "enable": false 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "bdev_malloc_create", 00:24:01.216 "params": { 00:24:01.216 "name": "malloc0", 00:24:01.216 "num_blocks": 8192, 
00:24:01.216 "block_size": 4096, 00:24:01.216 "physical_block_size": 4096, 00:24:01.216 "uuid": "a000b7bf-0b49-433c-9f5b-be4f52cfd023", 00:24:01.216 "optimal_io_boundary": 0, 00:24:01.216 "md_size": 0, 00:24:01.216 "dif_type": 0, 00:24:01.216 "dif_is_head_of_md": false, 00:24:01.216 "dif_pi_format": 0 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "bdev_wait_for_examine" 00:24:01.216 } 00:24:01.216 ] 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "subsystem": "nbd", 00:24:01.216 "config": [] 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "subsystem": "scheduler", 00:24:01.216 "config": [ 00:24:01.216 { 00:24:01.216 "method": "framework_set_scheduler", 00:24:01.216 "params": { 00:24:01.216 "name": "static" 00:24:01.216 } 00:24:01.216 } 00:24:01.216 ] 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "subsystem": "nvmf", 00:24:01.216 "config": [ 00:24:01.216 { 00:24:01.216 "method": "nvmf_set_config", 00:24:01.216 "params": { 00:24:01.216 "discovery_filter": "match_any", 00:24:01.216 "admin_cmd_passthru": { 00:24:01.216 "identify_ctrlr": false 00:24:01.216 }, 00:24:01.216 "dhchap_digests": [ 00:24:01.216 "sha256", 00:24:01.216 "sha384", 00:24:01.216 "sha512" 00:24:01.216 ], 00:24:01.216 "dhchap_dhgroups": [ 00:24:01.216 "null", 00:24:01.216 "ffdhe2048", 00:24:01.216 "ffdhe3072", 00:24:01.216 "ffdhe4096", 00:24:01.216 "ffdhe6144", 00:24:01.216 "ffdhe8192" 00:24:01.216 ] 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "nvmf_set_max_subsystems", 00:24:01.216 "params": { 00:24:01.216 "max_subsystems": 1024 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "nvmf_set_crdt", 00:24:01.216 "params": { 00:24:01.216 "crdt1": 0, 00:24:01.216 "crdt2": 0, 00:24:01.216 "crdt3": 0 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "nvmf_create_transport", 00:24:01.216 "params": { 00:24:01.216 "trtype": "TCP", 00:24:01.216 "max_queue_depth": 128, 00:24:01.216 "max_io_qpairs_per_ctrlr": 127, 00:24:01.216 "in_capsule_data_size": 4096, 00:24:01.216 "max_io_size": 131072, 00:24:01.216 "io_unit_size": 131072, 00:24:01.216 "max_aq_depth": 128, 00:24:01.216 "num_shared_buffers": 511, 00:24:01.216 "buf_cache_size": 4294967295, 00:24:01.216 "dif_insert_or_strip": false, 00:24:01.216 "zcopy": false, 00:24:01.216 "c2h_success": false, 00:24:01.216 "sock_priority": 0, 00:24:01.216 "abort_timeout_sec": 1, 00:24:01.216 "ack_timeout": 0, 00:24:01.216 "data_wr_pool_size": 0 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "nvmf_create_subsystem", 00:24:01.216 "params": { 00:24:01.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.216 "allow_any_host": false, 00:24:01.216 "serial_number": "00000000000000000000", 00:24:01.216 "model_number": "SPDK bdev Controller", 00:24:01.216 "max_namespaces": 32, 00:24:01.216 "min_cntlid": 1, 00:24:01.216 "max_cntlid": 65519, 00:24:01.216 "ana_reporting": false 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "nvmf_subsystem_add_host", 00:24:01.216 "params": { 00:24:01.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.216 "host": "nqn.2016-06.io.spdk:host1", 00:24:01.216 "psk": "key0" 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "nvmf_subsystem_add_ns", 00:24:01.216 "params": { 00:24:01.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.216 "namespace": { 00:24:01.216 "nsid": 1, 00:24:01.216 "bdev_name": "malloc0", 00:24:01.216 "nguid": "A000B7BF0B49433C9F5BBE4F52CFD023", 00:24:01.216 "uuid": "a000b7bf-0b49-433c-9f5b-be4f52cfd023", 00:24:01.216 
"no_auto_visible": false 00:24:01.216 } 00:24:01.216 } 00:24:01.216 }, 00:24:01.216 { 00:24:01.216 "method": "nvmf_subsystem_add_listener", 00:24:01.216 "params": { 00:24:01.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.216 "listen_address": { 00:24:01.216 "trtype": "TCP", 00:24:01.216 "adrfam": "IPv4", 00:24:01.216 "traddr": "10.0.0.2", 00:24:01.216 "trsvcid": "4420" 00:24:01.216 }, 00:24:01.216 "secure_channel": false, 00:24:01.216 "sock_impl": "ssl" 00:24:01.216 } 00:24:01.216 } 00:24:01.216 ] 00:24:01.216 } 00:24:01.216 ] 00:24:01.216 }' 00:24:01.216 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:01.476 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:01.476 "subsystems": [ 00:24:01.476 { 00:24:01.476 "subsystem": "keyring", 00:24:01.476 "config": [ 00:24:01.476 { 00:24:01.476 "method": "keyring_file_add_key", 00:24:01.476 "params": { 00:24:01.476 "name": "key0", 00:24:01.476 "path": "/tmp/tmp.BVlPff9GKA" 00:24:01.476 } 00:24:01.476 } 00:24:01.476 ] 00:24:01.476 }, 00:24:01.476 { 00:24:01.476 "subsystem": "iobuf", 00:24:01.476 "config": [ 00:24:01.476 { 00:24:01.476 "method": "iobuf_set_options", 00:24:01.476 "params": { 00:24:01.476 "small_pool_count": 8192, 00:24:01.476 "large_pool_count": 1024, 00:24:01.476 "small_bufsize": 8192, 00:24:01.476 "large_bufsize": 135168, 00:24:01.476 "enable_numa": false 00:24:01.476 } 00:24:01.476 } 00:24:01.476 ] 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "subsystem": "sock", 00:24:01.477 "config": [ 00:24:01.477 { 00:24:01.477 "method": "sock_set_default_impl", 00:24:01.477 "params": { 00:24:01.477 "impl_name": "posix" 00:24:01.477 } 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "method": "sock_impl_set_options", 00:24:01.477 "params": { 00:24:01.477 "impl_name": "ssl", 00:24:01.477 "recv_buf_size": 4096, 00:24:01.477 "send_buf_size": 4096, 00:24:01.477 "enable_recv_pipe": true, 00:24:01.477 "enable_quickack": false, 00:24:01.477 "enable_placement_id": 0, 00:24:01.477 "enable_zerocopy_send_server": true, 00:24:01.477 "enable_zerocopy_send_client": false, 00:24:01.477 "zerocopy_threshold": 0, 00:24:01.477 "tls_version": 0, 00:24:01.477 "enable_ktls": false 00:24:01.477 } 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "method": "sock_impl_set_options", 00:24:01.477 "params": { 00:24:01.477 "impl_name": "posix", 00:24:01.477 "recv_buf_size": 2097152, 00:24:01.477 "send_buf_size": 2097152, 00:24:01.477 "enable_recv_pipe": true, 00:24:01.477 "enable_quickack": false, 00:24:01.477 "enable_placement_id": 0, 00:24:01.477 "enable_zerocopy_send_server": true, 00:24:01.477 "enable_zerocopy_send_client": false, 00:24:01.477 "zerocopy_threshold": 0, 00:24:01.477 "tls_version": 0, 00:24:01.477 "enable_ktls": false 00:24:01.477 } 00:24:01.477 } 00:24:01.477 ] 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "subsystem": "vmd", 00:24:01.477 "config": [] 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "subsystem": "accel", 00:24:01.477 "config": [ 00:24:01.477 { 00:24:01.477 "method": "accel_set_options", 00:24:01.477 "params": { 00:24:01.477 "small_cache_size": 128, 00:24:01.477 "large_cache_size": 16, 00:24:01.477 "task_count": 2048, 00:24:01.477 "sequence_count": 2048, 00:24:01.477 "buf_count": 2048 00:24:01.477 } 00:24:01.477 } 00:24:01.477 ] 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "subsystem": "bdev", 00:24:01.477 "config": [ 00:24:01.477 { 00:24:01.477 "method": "bdev_set_options", 00:24:01.477 
"params": { 00:24:01.477 "bdev_io_pool_size": 65535, 00:24:01.477 "bdev_io_cache_size": 256, 00:24:01.477 "bdev_auto_examine": true, 00:24:01.477 "iobuf_small_cache_size": 128, 00:24:01.477 "iobuf_large_cache_size": 16 00:24:01.477 } 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "method": "bdev_raid_set_options", 00:24:01.477 "params": { 00:24:01.477 "process_window_size_kb": 1024, 00:24:01.477 "process_max_bandwidth_mb_sec": 0 00:24:01.477 } 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "method": "bdev_iscsi_set_options", 00:24:01.477 "params": { 00:24:01.477 "timeout_sec": 30 00:24:01.477 } 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "method": "bdev_nvme_set_options", 00:24:01.477 "params": { 00:24:01.477 "action_on_timeout": "none", 00:24:01.477 "timeout_us": 0, 00:24:01.477 "timeout_admin_us": 0, 00:24:01.477 "keep_alive_timeout_ms": 10000, 00:24:01.477 "arbitration_burst": 0, 00:24:01.477 "low_priority_weight": 0, 00:24:01.477 "medium_priority_weight": 0, 00:24:01.477 "high_priority_weight": 0, 00:24:01.477 "nvme_adminq_poll_period_us": 10000, 00:24:01.477 "nvme_ioq_poll_period_us": 0, 00:24:01.477 "io_queue_requests": 512, 00:24:01.477 "delay_cmd_submit": true, 00:24:01.477 "transport_retry_count": 4, 00:24:01.477 "bdev_retry_count": 3, 00:24:01.477 "transport_ack_timeout": 0, 00:24:01.477 "ctrlr_loss_timeout_sec": 0, 00:24:01.477 "reconnect_delay_sec": 0, 00:24:01.477 "fast_io_fail_timeout_sec": 0, 00:24:01.477 "disable_auto_failback": false, 00:24:01.477 "generate_uuids": false, 00:24:01.477 "transport_tos": 0, 00:24:01.477 "nvme_error_stat": false, 00:24:01.477 "rdma_srq_size": 0, 00:24:01.477 "io_path_stat": false, 00:24:01.477 "allow_accel_sequence": false, 00:24:01.477 "rdma_max_cq_size": 0, 00:24:01.477 "rdma_cm_event_timeout_ms": 0, 00:24:01.477 "dhchap_digests": [ 00:24:01.477 "sha256", 00:24:01.477 "sha384", 00:24:01.477 "sha512" 00:24:01.477 ], 00:24:01.477 "dhchap_dhgroups": [ 00:24:01.477 "null", 00:24:01.477 "ffdhe2048", 00:24:01.477 "ffdhe3072", 00:24:01.477 "ffdhe4096", 00:24:01.477 "ffdhe6144", 00:24:01.477 "ffdhe8192" 00:24:01.477 ] 00:24:01.477 } 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "method": "bdev_nvme_attach_controller", 00:24:01.477 "params": { 00:24:01.477 "name": "nvme0", 00:24:01.477 "trtype": "TCP", 00:24:01.477 "adrfam": "IPv4", 00:24:01.477 "traddr": "10.0.0.2", 00:24:01.477 "trsvcid": "4420", 00:24:01.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.477 "prchk_reftag": false, 00:24:01.477 "prchk_guard": false, 00:24:01.477 "ctrlr_loss_timeout_sec": 0, 00:24:01.477 "reconnect_delay_sec": 0, 00:24:01.477 "fast_io_fail_timeout_sec": 0, 00:24:01.477 "psk": "key0", 00:24:01.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.477 "hdgst": false, 00:24:01.477 "ddgst": false, 00:24:01.477 "multipath": "multipath" 00:24:01.477 } 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "method": "bdev_nvme_set_hotplug", 00:24:01.477 "params": { 00:24:01.477 "period_us": 100000, 00:24:01.477 "enable": false 00:24:01.477 } 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "method": "bdev_enable_histogram", 00:24:01.477 "params": { 00:24:01.477 "name": "nvme0n1", 00:24:01.477 "enable": true 00:24:01.477 } 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "method": "bdev_wait_for_examine" 00:24:01.477 } 00:24:01.477 ] 00:24:01.477 }, 00:24:01.477 { 00:24:01.477 "subsystem": "nbd", 00:24:01.477 "config": [] 00:24:01.477 } 00:24:01.477 ] 00:24:01.477 }' 00:24:01.477 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2358621 00:24:01.477 04:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2358621 ']' 00:24:01.477 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2358621 00:24:01.477 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:01.477 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:01.477 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2358621 00:24:01.477 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:01.477 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:01.477 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2358621' 00:24:01.477 killing process with pid 2358621 00:24:01.477 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2358621 00:24:01.477 Received shutdown signal, test time was about 1.000000 seconds 00:24:01.477 00:24:01.477 Latency(us) 00:24:01.477 [2024-10-28T03:59:52.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.477 [2024-10-28T03:59:52.073Z] =================================================================================================================== 00:24:01.477 [2024-10-28T03:59:52.073Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.477 04:59:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2358621 00:24:01.738 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2358594 00:24:01.738 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2358594 ']' 00:24:01.738 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2358594 00:24:01.738 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:01.738 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:01.738 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2358594 00:24:01.738 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:01.738 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:01.738 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2358594' 00:24:01.738 killing process with pid 2358594 00:24:01.738 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2358594 00:24:01.738 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2358594 00:24:01.998 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:01.998 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:01.998 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:01.998 "subsystems": [ 00:24:01.998 { 00:24:01.998 "subsystem": "keyring", 00:24:01.998 "config": [ 00:24:01.998 { 00:24:01.998 "method": "keyring_file_add_key", 00:24:01.998 "params": { 00:24:01.998 "name": "key0", 00:24:01.998 "path": 
"/tmp/tmp.BVlPff9GKA" 00:24:01.998 } 00:24:01.998 } 00:24:01.998 ] 00:24:01.998 }, 00:24:01.998 { 00:24:01.998 "subsystem": "iobuf", 00:24:01.998 "config": [ 00:24:01.998 { 00:24:01.998 "method": "iobuf_set_options", 00:24:01.998 "params": { 00:24:01.998 "small_pool_count": 8192, 00:24:01.998 "large_pool_count": 1024, 00:24:01.998 "small_bufsize": 8192, 00:24:01.998 "large_bufsize": 135168, 00:24:01.998 "enable_numa": false 00:24:01.998 } 00:24:01.998 } 00:24:01.998 ] 00:24:01.998 }, 00:24:01.998 { 00:24:01.998 "subsystem": "sock", 00:24:01.998 "config": [ 00:24:01.998 { 00:24:01.998 "method": "sock_set_default_impl", 00:24:01.998 "params": { 00:24:01.998 "impl_name": "posix" 00:24:01.998 } 00:24:01.998 }, 00:24:01.998 { 00:24:01.998 "method": "sock_impl_set_options", 00:24:01.998 "params": { 00:24:01.998 "impl_name": "ssl", 00:24:01.998 "recv_buf_size": 4096, 00:24:01.998 "send_buf_size": 4096, 00:24:01.998 "enable_recv_pipe": true, 00:24:01.998 "enable_quickack": false, 00:24:01.998 "enable_placement_id": 0, 00:24:01.998 "enable_zerocopy_send_server": true, 00:24:01.998 "enable_zerocopy_send_client": false, 00:24:01.998 "zerocopy_threshold": 0, 00:24:01.998 "tls_version": 0, 00:24:01.998 "enable_ktls": false 00:24:01.998 } 00:24:01.998 }, 00:24:01.998 { 00:24:01.999 "method": "sock_impl_set_options", 00:24:01.999 "params": { 00:24:01.999 "impl_name": "posix", 00:24:01.999 "recv_buf_size": 2097152, 00:24:01.999 "send_buf_size": 2097152, 00:24:01.999 "enable_recv_pipe": true, 00:24:01.999 "enable_quickack": false, 00:24:01.999 "enable_placement_id": 0, 00:24:01.999 "enable_zerocopy_send_server": true, 00:24:01.999 "enable_zerocopy_send_client": false, 00:24:01.999 "zerocopy_threshold": 0, 00:24:01.999 "tls_version": 0, 00:24:01.999 "enable_ktls": false 00:24:01.999 } 00:24:01.999 } 00:24:01.999 ] 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "subsystem": "vmd", 00:24:01.999 "config": [] 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "subsystem": "accel", 00:24:01.999 "config": [ 00:24:01.999 { 00:24:01.999 "method": "accel_set_options", 00:24:01.999 "params": { 00:24:01.999 "small_cache_size": 128, 00:24:01.999 "large_cache_size": 16, 00:24:01.999 "task_count": 2048, 00:24:01.999 "sequence_count": 2048, 00:24:01.999 "buf_count": 2048 00:24:01.999 } 00:24:01.999 } 00:24:01.999 ] 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "subsystem": "bdev", 00:24:01.999 "config": [ 00:24:01.999 { 00:24:01.999 "method": "bdev_set_options", 00:24:01.999 "params": { 00:24:01.999 "bdev_io_pool_size": 65535, 00:24:01.999 "bdev_io_cache_size": 256, 00:24:01.999 "bdev_auto_examine": true, 00:24:01.999 "iobuf_small_cache_size": 128, 00:24:01.999 "iobuf_large_cache_size": 16 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "bdev_raid_set_options", 00:24:01.999 "params": { 00:24:01.999 "process_window_size_kb": 1024, 00:24:01.999 "process_max_bandwidth_mb_sec": 0 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "bdev_iscsi_set_options", 00:24:01.999 "params": { 00:24:01.999 "timeout_sec": 30 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "bdev_nvme_set_options", 00:24:01.999 "params": { 00:24:01.999 "action_on_timeout": "none", 00:24:01.999 "timeout_us": 0, 00:24:01.999 "timeout_admin_us": 0, 00:24:01.999 "keep_alive_timeout_ms": 10000, 00:24:01.999 "arbitration_burst": 0, 00:24:01.999 "low_priority_weight": 0, 00:24:01.999 "medium_priority_weight": 0, 00:24:01.999 "high_priority_weight": 0, 00:24:01.999 "nvme_adminq_poll_period_us": 10000, 00:24:01.999 
"nvme_ioq_poll_period_us": 0, 00:24:01.999 "io_queue_requests": 0, 00:24:01.999 "delay_cmd_submit": true, 00:24:01.999 "transport_retry_count": 4, 00:24:01.999 "bdev_retry_count": 3, 00:24:01.999 "transport_ack_timeout": 0, 00:24:01.999 "ctrlr_loss_timeout_sec": 0, 00:24:01.999 "reconnect_delay_sec": 0, 00:24:01.999 "fast_io_fail_timeout_sec": 0, 00:24:01.999 "disable_auto_failback": false, 00:24:01.999 "generate_uuids": false, 00:24:01.999 "transport_tos": 0, 00:24:01.999 "nvme_error_stat": false, 00:24:01.999 "rdma_srq_size": 0, 00:24:01.999 "io_path_stat": false, 00:24:01.999 "allow_accel_sequence": false, 00:24:01.999 "rdma_max_cq_size": 0, 00:24:01.999 "rdma_cm_event_timeout_ms": 0, 00:24:01.999 "dhchap_digests": [ 00:24:01.999 "sha256", 00:24:01.999 "sha384", 00:24:01.999 "sha512" 00:24:01.999 ], 00:24:01.999 "dhchap_dhgroups": [ 00:24:01.999 "null", 00:24:01.999 "ffdhe2048", 00:24:01.999 "ffdhe3072", 00:24:01.999 "ffdhe4096", 00:24:01.999 "ffdhe6144", 00:24:01.999 "ffdhe8192" 00:24:01.999 ] 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "bdev_nvme_set_hotplug", 00:24:01.999 "params": { 00:24:01.999 "period_us": 100000, 00:24:01.999 "enable": false 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "bdev_malloc_create", 00:24:01.999 "params": { 00:24:01.999 "name": "malloc0", 00:24:01.999 "num_blocks": 8192, 00:24:01.999 "block_size": 4096, 00:24:01.999 "physical_block_size": 4096, 00:24:01.999 "uuid": "a000b7bf-0b49-433c-9f5b-be4f52cfd023", 00:24:01.999 "optimal_io_boundary": 0, 00:24:01.999 "md_size": 0, 00:24:01.999 "dif_type": 0, 00:24:01.999 "dif_is_head_of_md": false, 00:24:01.999 "dif_pi_format": 0 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "bdev_wait_for_examine" 00:24:01.999 } 00:24:01.999 ] 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "subsystem": "nbd", 00:24:01.999 "config": [] 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "subsystem": "scheduler", 00:24:01.999 "config": [ 00:24:01.999 { 00:24:01.999 "method": "framework_set_scheduler", 00:24:01.999 "params": { 00:24:01.999 "name": "static" 00:24:01.999 } 00:24:01.999 } 00:24:01.999 ] 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "subsystem": "nvmf", 00:24:01.999 "config": [ 00:24:01.999 { 00:24:01.999 "method": "nvmf_set_config", 00:24:01.999 "params": { 00:24:01.999 "discovery_filter": "match_any", 00:24:01.999 "admin_cmd_passthru": { 00:24:01.999 "identify_ctrlr": false 00:24:01.999 }, 00:24:01.999 "dhchap_digests": [ 00:24:01.999 "sha256", 00:24:01.999 "sha384", 00:24:01.999 "sha512" 00:24:01.999 ], 00:24:01.999 "dhchap_dhgroups": [ 00:24:01.999 "null", 00:24:01.999 "ffdhe2048", 00:24:01.999 "ffdhe3072", 00:24:01.999 "ffdhe4096", 00:24:01.999 "ffdhe6144", 00:24:01.999 "ffdhe8192" 00:24:01.999 ] 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "nvmf_set_max_subsystems", 00:24:01.999 "params": { 00:24:01.999 "max_subsystems": 1024 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "nvmf_set_crdt", 00:24:01.999 "params": { 00:24:01.999 "crdt1": 0, 00:24:01.999 "crdt2": 0, 00:24:01.999 "crdt3": 0 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "nvmf_create_transport", 00:24:01.999 "params": { 00:24:01.999 "trtype": "TCP", 00:24:01.999 "max_queue_depth": 128, 00:24:01.999 "max_io_qpairs_per_ctrlr": 127, 00:24:01.999 "in_capsule_data_size": 4096, 00:24:01.999 "max_io_size": 131072, 00:24:01.999 "io_unit_size": 131072, 00:24:01.999 "max_aq_depth": 128, 00:24:01.999 "num_shared_buffers": 511, 00:24:01.999 
"buf_cache_size": 4294967295, 00:24:01.999 "dif_insert_or_strip": false, 00:24:01.999 "zcopy": false, 00:24:01.999 "c2h_success": false, 00:24:01.999 "sock_priority": 0, 00:24:01.999 "abort_timeout_sec": 1, 00:24:01.999 "ack_timeout": 0, 00:24:01.999 "data_wr_pool_size": 0 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "nvmf_create_subsystem", 00:24:01.999 "params": { 00:24:01.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.999 "allow_any_host": false, 00:24:01.999 "serial_number": "00000000000000000000", 00:24:01.999 "model_number": "SPDK bdev Controller", 00:24:01.999 "max_namespaces": 32, 00:24:01.999 "min_cntlid": 1, 00:24:01.999 "max_cntlid": 65519, 00:24:01.999 "ana_reporting": false 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "nvmf_subsystem_add_host", 00:24:01.999 "params": { 00:24:01.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.999 "host": "nqn.2016-06.io.spdk:host1", 00:24:01.999 "psk": "key0" 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "nvmf_subsystem_add_ns", 00:24:01.999 "params": { 00:24:01.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.999 "namespace": { 00:24:01.999 "nsid": 1, 00:24:01.999 "bdev_name": "malloc0", 00:24:01.999 "nguid": "A000B7BF0B49433C9F5BBE4F52CFD023", 00:24:01.999 "uuid": "a000b7bf-0b49-433c-9f5b-be4f52cfd023", 00:24:01.999 "no_auto_visible": false 00:24:01.999 } 00:24:01.999 } 00:24:01.999 }, 00:24:01.999 { 00:24:01.999 "method": "nvmf_subsystem_add_listener", 00:24:01.999 "params": { 00:24:01.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.999 "listen_address": { 00:24:01.999 "trtype": "TCP", 00:24:01.999 "adrfam": "IPv4", 00:24:01.999 "traddr": "10.0.0.2", 00:24:01.999 "trsvcid": "4420" 00:24:01.999 }, 00:24:01.999 "secure_channel": false, 00:24:01.999 "sock_impl": "ssl" 00:24:01.999 } 00:24:01.999 } 00:24:01.999 ] 00:24:01.999 } 00:24:01.999 ] 00:24:01.999 }' 00:24:01.999 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:01.999 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.999 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2359021 00:24:01.999 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:01.999 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2359021 00:24:01.999 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2359021 ']' 00:24:01.999 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.999 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.999 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.999 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.999 04:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.000 [2024-10-28 04:59:52.479525] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:24:02.000 [2024-10-28 04:59:52.479605] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.258 [2024-10-28 04:59:52.618150] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:02.258 [2024-10-28 04:59:52.653468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.258 [2024-10-28 04:59:52.698380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.258 [2024-10-28 04:59:52.698439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.258 [2024-10-28 04:59:52.698453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.258 [2024-10-28 04:59:52.698465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.258 [2024-10-28 04:59:52.698475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.258 [2024-10-28 04:59:52.699112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.518 [2024-10-28 04:59:52.936780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.518 [2024-10-28 04:59:52.968721] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.518 [2024-10-28 04:59:52.969006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2359165 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2359165 /var/tmp/bdevperf.sock 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2359165 ']' 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
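With the target listening on 10.0.0.2:4420, the test launches bdevperf as the TLS initiator. Restating its command line with comments — flag meanings inferred from the surrounding log and common SPDK bdevperf usage, so read this as an annotated sketch rather than authoritative documentation:

# -m 2                       core mask 0x2, i.e. run the reactor on core 1 ("Reactor started on core 1" below)
# -z                         start idle and wait for the perform_tests RPC (issued later via bdevperf.py)
# -r /var/tmp/bdevperf.sock  JSON-RPC socket that rpc.py and bdevperf.py talk to below
# -q 128                     128 outstanding I/Os ("depth: 128" in the results)
# -o 4k                      4096-byte I/Os ("IO size: 4096" in the results)
# -w verify                  write, read back and compare workload
# -t 1                       run the workload for 1 second
# -c /dev/fd/63              read the JSON configuration echoed below through process substitution
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63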
00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.085 04:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:03.085 "subsystems": [ 00:24:03.085 { 00:24:03.085 "subsystem": "keyring", 00:24:03.085 "config": [ 00:24:03.085 { 00:24:03.085 "method": "keyring_file_add_key", 00:24:03.085 "params": { 00:24:03.085 "name": "key0", 00:24:03.085 "path": "/tmp/tmp.BVlPff9GKA" 00:24:03.085 } 00:24:03.085 } 00:24:03.085 ] 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "subsystem": "iobuf", 00:24:03.085 "config": [ 00:24:03.085 { 00:24:03.085 "method": "iobuf_set_options", 00:24:03.085 "params": { 00:24:03.085 "small_pool_count": 8192, 00:24:03.085 "large_pool_count": 1024, 00:24:03.085 "small_bufsize": 8192, 00:24:03.085 "large_bufsize": 135168, 00:24:03.085 "enable_numa": false 00:24:03.085 } 00:24:03.085 } 00:24:03.085 ] 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "subsystem": "sock", 00:24:03.085 "config": [ 00:24:03.085 { 00:24:03.085 "method": "sock_set_default_impl", 00:24:03.085 "params": { 00:24:03.085 "impl_name": "posix" 00:24:03.085 } 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "method": "sock_impl_set_options", 00:24:03.085 "params": { 00:24:03.085 "impl_name": "ssl", 00:24:03.085 "recv_buf_size": 4096, 00:24:03.085 "send_buf_size": 4096, 00:24:03.085 "enable_recv_pipe": true, 00:24:03.085 "enable_quickack": false, 00:24:03.085 "enable_placement_id": 0, 00:24:03.085 "enable_zerocopy_send_server": true, 00:24:03.085 "enable_zerocopy_send_client": false, 00:24:03.085 "zerocopy_threshold": 0, 00:24:03.085 "tls_version": 0, 00:24:03.085 "enable_ktls": false 00:24:03.085 } 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "method": "sock_impl_set_options", 00:24:03.085 "params": { 00:24:03.085 "impl_name": "posix", 00:24:03.085 "recv_buf_size": 2097152, 00:24:03.085 "send_buf_size": 2097152, 00:24:03.085 "enable_recv_pipe": true, 00:24:03.085 "enable_quickack": false, 00:24:03.085 "enable_placement_id": 0, 00:24:03.085 "enable_zerocopy_send_server": true, 00:24:03.085 "enable_zerocopy_send_client": false, 00:24:03.085 "zerocopy_threshold": 0, 00:24:03.085 "tls_version": 0, 00:24:03.085 "enable_ktls": false 00:24:03.085 } 00:24:03.085 } 00:24:03.085 ] 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "subsystem": "vmd", 00:24:03.085 "config": [] 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "subsystem": "accel", 00:24:03.085 "config": [ 00:24:03.085 { 00:24:03.085 "method": "accel_set_options", 00:24:03.085 "params": { 00:24:03.085 "small_cache_size": 128, 00:24:03.085 "large_cache_size": 16, 00:24:03.085 "task_count": 2048, 00:24:03.085 "sequence_count": 2048, 00:24:03.085 "buf_count": 2048 00:24:03.085 } 00:24:03.085 } 00:24:03.085 ] 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "subsystem": "bdev", 00:24:03.085 "config": [ 00:24:03.085 { 00:24:03.085 "method": "bdev_set_options", 00:24:03.085 "params": { 00:24:03.085 "bdev_io_pool_size": 65535, 00:24:03.085 "bdev_io_cache_size": 256, 00:24:03.085 "bdev_auto_examine": true, 00:24:03.085 "iobuf_small_cache_size": 128, 00:24:03.085 "iobuf_large_cache_size": 16 00:24:03.085 } 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "method": "bdev_raid_set_options", 00:24:03.085 "params": { 00:24:03.085 "process_window_size_kb": 1024, 00:24:03.085 "process_max_bandwidth_mb_sec": 0 00:24:03.085 } 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "method": "bdev_iscsi_set_options", 
00:24:03.085 "params": { 00:24:03.085 "timeout_sec": 30 00:24:03.085 } 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "method": "bdev_nvme_set_options", 00:24:03.085 "params": { 00:24:03.085 "action_on_timeout": "none", 00:24:03.085 "timeout_us": 0, 00:24:03.085 "timeout_admin_us": 0, 00:24:03.085 "keep_alive_timeout_ms": 10000, 00:24:03.085 "arbitration_burst": 0, 00:24:03.085 "low_priority_weight": 0, 00:24:03.085 "medium_priority_weight": 0, 00:24:03.085 "high_priority_weight": 0, 00:24:03.085 "nvme_adminq_poll_period_us": 10000, 00:24:03.085 "nvme_ioq_poll_period_us": 0, 00:24:03.085 "io_queue_requests": 512, 00:24:03.085 "delay_cmd_submit": true, 00:24:03.085 "transport_retry_count": 4, 00:24:03.085 "bdev_retry_count": 3, 00:24:03.085 "transport_ack_timeout": 0, 00:24:03.085 "ctrlr_loss_timeout_sec": 0, 00:24:03.085 "reconnect_delay_sec": 0, 00:24:03.085 "fast_io_fail_timeout_sec": 0, 00:24:03.085 "disable_auto_failback": false, 00:24:03.085 "generate_uuids": false, 00:24:03.085 "transport_tos": 0, 00:24:03.085 "nvme_error_stat": false, 00:24:03.085 "rdma_srq_size": 0, 00:24:03.085 "io_path_stat": false, 00:24:03.085 "allow_accel_sequence": false, 00:24:03.085 "rdma_max_cq_size": 0, 00:24:03.085 "rdma_cm_event_timeout_ms": 0, 00:24:03.085 "dhchap_digests": [ 00:24:03.085 "sha256", 00:24:03.085 "sha384", 00:24:03.085 "sha512" 00:24:03.085 ], 00:24:03.085 "dhchap_dhgroups": [ 00:24:03.085 "null", 00:24:03.085 "ffdhe2048", 00:24:03.085 "ffdhe3072", 00:24:03.085 "ffdhe4096", 00:24:03.085 "ffdhe6144", 00:24:03.085 "ffdhe8192" 00:24:03.085 ] 00:24:03.085 } 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "method": "bdev_nvme_attach_controller", 00:24:03.085 "params": { 00:24:03.085 "name": "nvme0", 00:24:03.085 "trtype": "TCP", 00:24:03.085 "adrfam": "IPv4", 00:24:03.085 "traddr": "10.0.0.2", 00:24:03.085 "trsvcid": "4420", 00:24:03.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.085 "prchk_reftag": false, 00:24:03.085 "prchk_guard": false, 00:24:03.085 "ctrlr_loss_timeout_sec": 0, 00:24:03.085 "reconnect_delay_sec": 0, 00:24:03.085 "fast_io_fail_timeout_sec": 0, 00:24:03.085 "psk": "key0", 00:24:03.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.085 "hdgst": false, 00:24:03.085 "ddgst": false, 00:24:03.085 "multipath": "multipath" 00:24:03.085 } 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "method": "bdev_nvme_set_hotplug", 00:24:03.085 "params": { 00:24:03.085 "period_us": 100000, 00:24:03.085 "enable": false 00:24:03.085 } 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "method": "bdev_enable_histogram", 00:24:03.085 "params": { 00:24:03.085 "name": "nvme0n1", 00:24:03.085 "enable": true 00:24:03.085 } 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "method": "bdev_wait_for_examine" 00:24:03.085 } 00:24:03.085 ] 00:24:03.085 }, 00:24:03.085 { 00:24:03.085 "subsystem": "nbd", 00:24:03.085 "config": [] 00:24:03.085 } 00:24:03.085 ] 00:24:03.085 }' 00:24:03.085 [2024-10-28 04:59:53.652586] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:24:03.085 [2024-10-28 04:59:53.652684] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359165 ] 00:24:03.343 [2024-10-28 04:59:53.784902] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:03.343 [2024-10-28 04:59:53.824479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.343 [2024-10-28 04:59:53.874506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.619 [2024-10-28 04:59:54.055975] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:04.186 04:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:04.186 04:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:04.186 04:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.186 04:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:04.443 04:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.443 04:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:04.703 Running I/O for 1 seconds... 00:24:05.645 3394.00 IOPS, 13.26 MiB/s 00:24:05.645 Latency(us) 00:24:05.645 [2024-10-28T03:59:56.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.645 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:05.645 Verification LBA range: start 0x0 length 0x2000 00:24:05.645 nvme0n1 : 1.03 3430.26 13.40 0.00 0.00 36863.73 10024.44 44574.70 00:24:05.645 [2024-10-28T03:59:56.241Z] =================================================================================================================== 00:24:05.645 [2024-10-28T03:59:56.241Z] Total : 3430.26 13.40 0.00 0.00 36863.73 10024.44 44574.70 00:24:05.645 { 00:24:05.645 "results": [ 00:24:05.645 { 00:24:05.645 "job": "nvme0n1", 00:24:05.645 "core_mask": "0x2", 00:24:05.645 "workload": "verify", 00:24:05.645 "status": "finished", 00:24:05.645 "verify_range": { 00:24:05.645 "start": 0, 00:24:05.645 "length": 8192 00:24:05.645 }, 00:24:05.645 "queue_depth": 128, 00:24:05.645 "io_size": 4096, 00:24:05.645 "runtime": 1.027036, 00:24:05.645 "iops": 3430.2595040485435, 00:24:05.645 "mibps": 13.399451187689623, 00:24:05.645 "io_failed": 0, 00:24:05.645 "io_timeout": 0, 00:24:05.645 "avg_latency_us": 36863.72870886225, 00:24:05.645 "min_latency_us": 10024.441061815482, 00:24:05.645 "max_latency_us": 44574.69909040282 00:24:05.645 } 00:24:05.645 ], 00:24:05.645 "core_count": 1 00:24:05.645 } 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:05.645 04:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:05.645 nvmf_trace.0 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2359165 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2359165 ']' 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2359165 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2359165 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2359165' 00:24:05.645 killing process with pid 2359165 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2359165 00:24:05.645 Received shutdown signal, test time was about 1.000000 seconds 00:24:05.645 00:24:05.645 Latency(us) 00:24:05.645 [2024-10-28T03:59:56.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.645 [2024-10-28T03:59:56.241Z] =================================================================================================================== 00:24:05.645 [2024-10-28T03:59:56.241Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.645 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2359165 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.904 rmmod nvme_tcp 00:24:05.904 rmmod nvme_fabrics 00:24:05.904 rmmod nvme_keyring 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 2359021 ']' 00:24:05.904 04:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 2359021 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2359021 ']' 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2359021 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.904 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2359021 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2359021' 00:24:06.219 killing process with pid 2359021 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2359021 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2359021 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.219 04:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.iAGZa45fPX /tmp/tmp.yYDfHHcAgg /tmp/tmp.BVlPff9GKA 00:24:08.784 00:24:08.784 real 1m32.203s 00:24:08.784 user 2m32.018s 00:24:08.784 sys 0m26.967s 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.784 ************************************ 00:24:08.784 END TEST nvmf_tls 00:24:08.784 ************************************ 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:08.784 ************************************ 00:24:08.784 START TEST nvmf_fips 00:24:08.784 ************************************ 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:08.784 * Looking for test storage... 00:24:08.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1689 -- # lcov --version 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:08.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.784 --rc genhtml_branch_coverage=1 00:24:08.784 --rc genhtml_function_coverage=1 00:24:08.784 --rc genhtml_legend=1 00:24:08.784 --rc geninfo_all_blocks=1 00:24:08.784 --rc geninfo_unexecuted_blocks=1 00:24:08.784 00:24:08.784 ' 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:24:08.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.784 --rc genhtml_branch_coverage=1 00:24:08.784 --rc genhtml_function_coverage=1 00:24:08.784 --rc genhtml_legend=1 00:24:08.784 --rc geninfo_all_blocks=1 00:24:08.784 --rc geninfo_unexecuted_blocks=1 00:24:08.784 00:24:08.784 ' 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:08.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.784 --rc genhtml_branch_coverage=1 00:24:08.784 --rc genhtml_function_coverage=1 00:24:08.784 --rc genhtml_legend=1 00:24:08.784 --rc geninfo_all_blocks=1 00:24:08.784 --rc geninfo_unexecuted_blocks=1 00:24:08.784 00:24:08.784 ' 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:08.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.784 --rc genhtml_branch_coverage=1 00:24:08.784 --rc genhtml_function_coverage=1 00:24:08.784 --rc genhtml_legend=1 00:24:08.784 --rc geninfo_all_blocks=1 00:24:08.784 --rc geninfo_unexecuted_blocks=1 00:24:08.784 00:24:08.784 ' 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.784 04:59:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.784 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.784 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.784 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.784 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.784 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:08.785 04:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:08.785 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:08.785 Error setting digest 00:24:08.785 4052D38E1C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:08.785 4052D38E1C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:08.786 
04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.786 04:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.695 05:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:10.695 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:10.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:10.695 05:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:10.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:10.695 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:10.696 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:10.696 05:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:24:10.696 00:24:10.696 --- 10.0.0.2 ping statistics --- 00:24:10.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.696 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:24:10.696 00:24:10.696 --- 10.0.0.1 ping statistics --- 00:24:10.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.696 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=2361574 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 2361574 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2361574 ']' 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.696 05:00:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:10.953 [2024-10-28 05:00:01.362860] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:24:10.953 [2024-10-28 05:00:01.362962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.953 [2024-10-28 05:00:01.501610] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:24:10.953 [2024-10-28 05:00:01.541762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.212 [2024-10-28 05:00:01.591304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.212 [2024-10-28 05:00:01.591367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.212 [2024-10-28 05:00:01.591381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.212 [2024-10-28 05:00:01.591393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.212 [2024-10-28 05:00:01.591410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.212 [2024-10-28 05:00:01.592023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.upG 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.upG 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.upG 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.upG 00:24:11.778 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.036 [2024-10-28 05:00:02.619763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.295 [2024-10-28 05:00:02.635708] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:12.295 [2024-10-28 05:00:02.635953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.295 malloc0 00:24:12.295 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.295 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2361780 00:24:12.295 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.295 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2361780 /var/tmp/bdevperf.sock 00:24:12.295 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2361780 ']' 00:24:12.295 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.295 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.295 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.295 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.295 05:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.295 [2024-10-28 05:00:02.769422] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:24:12.295 [2024-10-28 05:00:02.769525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361780 ] 00:24:12.553 [2024-10-28 05:00:02.901054] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:12.553 [2024-10-28 05:00:02.937496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.553 [2024-10-28 05:00:02.983680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.485 05:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.485 05:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:13.485 05:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.upG 00:24:13.485 05:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:13.743 [2024-10-28 05:00:04.221962] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.743 TLSTESTn1 00:24:13.743 05:00:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:14.000 Running I/O for 10 seconds... 
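Note on the trace above: the fips.sh steps just logged amount to writing an interchange-format TLS PSK to a temp file, starting bdevperf against its own RPC socket, registering the key, attaching a TLS-protected NVMe/TCP controller, and then driving the 10-second verify workload. A condensed sketch of that same sequence is below; it only restates the commands visible in this run (paths, nqn values and the elided key string are taken from the trace, not a general recipe).

# Sketch only: condenses the fips.sh flow traced above; values are the ones used by this run.
KEY_PATH=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:...' > "$KEY_PATH"      # TLS PSK in interchange format (value elided here)
chmod 0600 "$KEY_PATH"
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests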
00:24:15.867 3269.00 IOPS, 12.77 MiB/s [2024-10-28T04:00:07.837Z] 2990.00 IOPS, 11.68 MiB/s [2024-10-28T04:00:08.769Z] 2908.67 IOPS, 11.36 MiB/s [2024-10-28T04:00:09.703Z] 2861.25 IOPS, 11.18 MiB/s [2024-10-28T04:00:10.636Z] 2840.40 IOPS, 11.10 MiB/s [2024-10-28T04:00:11.569Z] 2816.67 IOPS, 11.00 MiB/s [2024-10-28T04:00:12.501Z] 2796.14 IOPS, 10.92 MiB/s [2024-10-28T04:00:13.435Z] 2698.00 IOPS, 10.54 MiB/s [2024-10-28T04:00:14.808Z] 2642.22 IOPS, 10.32 MiB/s [2024-10-28T04:00:14.808Z] 2602.30 IOPS, 10.17 MiB/s 00:24:24.212 Latency(us) 00:24:24.212 [2024-10-28T04:00:14.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.212 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:24.212 Verification LBA range: start 0x0 length 0x2000 00:24:24.212 TLSTESTn1 : 10.05 2602.95 10.17 0.00 0.00 49065.34 7785.97 53723.22 00:24:24.212 [2024-10-28T04:00:14.809Z] =================================================================================================================== 00:24:24.213 [2024-10-28T04:00:14.809Z] Total : 2602.95 10.17 0.00 0.00 49065.34 7785.97 53723.22 00:24:24.213 { 00:24:24.213 "results": [ 00:24:24.213 { 00:24:24.213 "job": "TLSTESTn1", 00:24:24.213 "core_mask": "0x4", 00:24:24.213 "workload": "verify", 00:24:24.213 "status": "finished", 00:24:24.213 "verify_range": { 00:24:24.213 "start": 0, 00:24:24.213 "length": 8192 00:24:24.213 }, 00:24:24.213 "queue_depth": 128, 00:24:24.213 "io_size": 4096, 00:24:24.213 "runtime": 10.046277, 00:24:24.213 "iops": 2602.9543083472613, 00:24:24.213 "mibps": 10.16779026698149, 00:24:24.213 "io_failed": 0, 00:24:24.213 "io_timeout": 0, 00:24:24.213 "avg_latency_us": 49065.34219712423, 00:24:24.213 "min_latency_us": 7785.973640245034, 00:24:24.213 "max_latency_us": 53723.21811769074 00:24:24.213 } 00:24:24.213 ], 00:24:24.213 "core_count": 1 00:24:24.213 } 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:24.213 nvmf_trace.0 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2361780 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2361780 ']' 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 2361780 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2361780 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2361780' 00:24:24.213 killing process with pid 2361780 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2361780 00:24:24.213 Received shutdown signal, test time was about 10.000000 seconds 00:24:24.213 00:24:24.213 Latency(us) 00:24:24.213 [2024-10-28T04:00:14.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.213 [2024-10-28T04:00:14.809Z] =================================================================================================================== 00:24:24.213 [2024-10-28T04:00:14.809Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2361780 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:24.213 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:24.213 rmmod nvme_tcp 00:24:24.213 rmmod nvme_fabrics 00:24:24.470 rmmod nvme_keyring 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 2361574 ']' 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 2361574 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2361574 ']' 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2361574 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2361574 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:24.470 05:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2361574' 00:24:24.470 killing process with pid 2361574 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2361574 00:24:24.470 05:00:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2361574 00:24:24.470 05:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:24.470 05:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:24.470 05:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:24.470 05:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:24.727 05:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:24:24.727 05:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:24.727 05:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:24:24.727 05:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:24.727 05:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:24.727 05:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.727 05:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.727 05:00:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.628 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:26.628 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.upG 00:24:26.628 00:24:26.628 real 0m18.277s 00:24:26.628 user 0m23.502s 00:24:26.628 sys 0m6.456s 00:24:26.628 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:26.628 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:26.628 ************************************ 00:24:26.628 END TEST nvmf_fips 00:24:26.628 ************************************ 00:24:26.628 05:00:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:26.628 05:00:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:26.628 05:00:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:26.628 05:00:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:26.628 ************************************ 00:24:26.628 START TEST nvmf_control_msg_list 00:24:26.628 ************************************ 00:24:26.628 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:26.628 * Looking for test storage... 
00:24:26.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:26.628 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:26.628 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1689 -- # lcov --version 00:24:26.628 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:26.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.887 --rc genhtml_branch_coverage=1 00:24:26.887 --rc genhtml_function_coverage=1 00:24:26.887 --rc genhtml_legend=1 00:24:26.887 --rc geninfo_all_blocks=1 00:24:26.887 --rc geninfo_unexecuted_blocks=1 00:24:26.887 00:24:26.887 ' 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:24:26.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.887 --rc genhtml_branch_coverage=1 00:24:26.887 --rc genhtml_function_coverage=1 00:24:26.887 --rc genhtml_legend=1 00:24:26.887 --rc geninfo_all_blocks=1 00:24:26.887 --rc geninfo_unexecuted_blocks=1 00:24:26.887 00:24:26.887 ' 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:26.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.887 --rc genhtml_branch_coverage=1 00:24:26.887 --rc genhtml_function_coverage=1 00:24:26.887 --rc genhtml_legend=1 00:24:26.887 --rc geninfo_all_blocks=1 00:24:26.887 --rc geninfo_unexecuted_blocks=1 00:24:26.887 00:24:26.887 ' 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:26.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.887 --rc genhtml_branch_coverage=1 00:24:26.887 --rc genhtml_function_coverage=1 00:24:26.887 --rc genhtml_legend=1 00:24:26.887 --rc geninfo_all_blocks=1 00:24:26.887 --rc geninfo_unexecuted_blocks=1 00:24:26.887 00:24:26.887 ' 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.887 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:26.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.888 05:00:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:28.792 05:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.792 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:28.793 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.793 05:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:28.793 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:28.793 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:28.793 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.793 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:29.053 05:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:29.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:24:29.053 00:24:29.053 --- 10.0.0.2 ping statistics --- 00:24:29.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.053 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:29.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:29.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:24:29.053 00:24:29.053 --- 10.0.0.1 ping statistics --- 00:24:29.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.053 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=2365611 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 2365611 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 2365611 ']' 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.053 05:00:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.053 [2024-10-28 05:00:19.509441] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:24:29.053 [2024-10-28 05:00:19.509530] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.311 [2024-10-28 05:00:19.648560] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:29.311 [2024-10-28 05:00:19.691037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.311 [2024-10-28 05:00:19.738702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.311 [2024-10-28 05:00:19.738760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.311 [2024-10-28 05:00:19.738775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.311 [2024-10-28 05:00:19.738787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.311 [2024-10-28 05:00:19.738798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
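Note on the startup notices above: this nvmf_tgt instance is launched inside the cvl_0_0_ns_spdk namespace that nvmf/common.sh assembled earlier in this section. A condensed sketch of that plumbing is below; it only restates the nvmftestinit/nvmfappstart commands already traced in this run (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and port 4420 are specific to this e810/tcp setup).

# Sketch only: condenses the target-namespace setup and launch traced above for this run.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # both directions must answer
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &      # then wait for /var/tmp/spdk.sock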
00:24:29.311 [2024-10-28 05:00:19.739446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:30.246 [2024-10-28 05:00:20.581157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:30.246 Malloc0 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.246 05:00:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:30.246 [2024-10-28 05:00:20.621807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2365765 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2365766 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2365767 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:30.246 05:00:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2365765 00:24:30.246 [2024-10-28 05:00:20.790435] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:30.246 [2024-10-28 05:00:20.790780] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:30.246 [2024-10-28 05:00:20.800156] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:31.621 Initializing NVMe Controllers 00:24:31.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:31.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:31.621 Initialization complete. Launching workers. 
00:24:31.621 ======================================================== 00:24:31.621 Latency(us) 00:24:31.621 Device Information : IOPS MiB/s Average min max 00:24:31.621 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40988.18 40792.78 41036.69 00:24:31.621 ======================================================== 00:24:31.621 Total : 25.00 0.10 40988.18 40792.78 41036.69 00:24:31.621 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2365766 00:24:31.621 Initializing NVMe Controllers 00:24:31.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:31.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:31.621 Initialization complete. Launching workers. 00:24:31.621 ======================================================== 00:24:31.621 Latency(us) 00:24:31.621 Device Information : IOPS MiB/s Average min max 00:24:31.621 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3881.00 15.16 257.28 202.54 515.57 00:24:31.621 ======================================================== 00:24:31.621 Total : 3881.00 15.16 257.28 202.54 515.57 00:24:31.621 00:24:31.621 Initializing NVMe Controllers 00:24:31.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:31.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:31.621 Initialization complete. Launching workers. 00:24:31.621 ======================================================== 00:24:31.621 Latency(us) 00:24:31.621 Device Information : IOPS MiB/s Average min max 00:24:31.621 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40982.74 40560.05 41038.44 00:24:31.621 ======================================================== 00:24:31.621 Total : 25.00 0.10 40982.74 40560.05 41038.44 00:24:31.621 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2365767 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:31.621 rmmod nvme_tcp 00:24:31.621 rmmod nvme_fabrics 00:24:31.621 rmmod nvme_keyring 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@515 -- # '[' -n 2365611 ']' 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 2365611 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 2365611 ']' 00:24:31.621 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 2365611 00:24:31.622 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:24:31.622 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:31.622 05:00:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2365611 00:24:31.622 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:31.622 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:31.622 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2365611' 00:24:31.622 killing process with pid 2365611 00:24:31.622 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 2365611 00:24:31.622 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 2365611 00:24:31.881 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:31.881 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:31.881 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:31.881 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:31.881 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:24:31.881 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:31.881 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:24:31.881 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:31.881 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:31.881 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.881 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.881 05:00:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.788 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:33.788 00:24:33.788 real 0m7.142s 00:24:33.788 user 0m6.545s 00:24:33.788 sys 0m2.594s 00:24:33.788 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:33.788 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:33.788 ************************************ 00:24:33.788 END TEST nvmf_control_msg_list 00:24:33.788 
************************************ 00:24:33.788 05:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:33.788 05:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:33.788 05:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:33.788 05:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:33.788 ************************************ 00:24:33.788 START TEST nvmf_wait_for_buf 00:24:33.788 ************************************ 00:24:33.788 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:34.047 * Looking for test storage... 00:24:34.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:34.047 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:34.047 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1689 -- # lcov --version 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:34.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.048 --rc genhtml_branch_coverage=1 00:24:34.048 --rc genhtml_function_coverage=1 00:24:34.048 --rc genhtml_legend=1 00:24:34.048 --rc geninfo_all_blocks=1 00:24:34.048 --rc geninfo_unexecuted_blocks=1 00:24:34.048 00:24:34.048 ' 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:24:34.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.048 --rc genhtml_branch_coverage=1 00:24:34.048 --rc genhtml_function_coverage=1 00:24:34.048 --rc genhtml_legend=1 00:24:34.048 --rc geninfo_all_blocks=1 00:24:34.048 --rc geninfo_unexecuted_blocks=1 00:24:34.048 00:24:34.048 ' 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:34.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.048 --rc genhtml_branch_coverage=1 00:24:34.048 --rc genhtml_function_coverage=1 00:24:34.048 --rc genhtml_legend=1 00:24:34.048 --rc geninfo_all_blocks=1 00:24:34.048 --rc geninfo_unexecuted_blocks=1 00:24:34.048 00:24:34.048 ' 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:34.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.048 --rc genhtml_branch_coverage=1 00:24:34.048 --rc genhtml_function_coverage=1 00:24:34.048 --rc genhtml_legend=1 00:24:34.048 --rc geninfo_all_blocks=1 00:24:34.048 --rc geninfo_unexecuted_blocks=1 00:24:34.048 00:24:34.048 ' 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.048 05:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.048 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:34.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:34.049 05:00:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.951 
05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:35.951 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:35.951 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:35.951 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:35.952 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:35.952 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.952 05:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.952 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:36.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:24:36.211 00:24:36.211 --- 10.0.0.2 ping statistics --- 00:24:36.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.211 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:36.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:24:36.211 00:24:36.211 --- 10.0.0.1 ping statistics --- 00:24:36.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.211 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=2367817 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 2367817 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 2367817 ']' 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:36.211 05:00:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:36.211 [2024-10-28 05:00:26.693141] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
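For reference, the target-side bring-up that the trace above performs can be reduced to the short shell sequence below. This is a minimal sketch, assuming the interface names (cvl_0_0 / cvl_0_1), the namespace name, and the 10.0.0.x addresses used on this particular CI host; the long workspace path to nvmf_tgt is abbreviated to the SPDK build directory, and the iptables comment tag added by the helper function is omitted.

  # move the target-side port into its own network namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic from the initiator interface reach listener port 4420
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # start the target inside the namespace with RPC configuration deferred (--wait-for-rpc)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc

The ping checks in the trace simply confirm that the namespaced target address (10.0.0.2) and the initiator address (10.0.0.1) can reach each other before the test issues any RPCs.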
00:24:36.212 [2024-10-28 05:00:26.693212] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.470 [2024-10-28 05:00:26.830629] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:36.470 [2024-10-28 05:00:26.872282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.470 [2024-10-28 05:00:26.919368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.470 [2024-10-28 05:00:26.919437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.470 [2024-10-28 05:00:26.919453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.470 [2024-10-28 05:00:26.919468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.470 [2024-10-28 05:00:26.919480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.470 [2024-10-28 05:00:26.920167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:37.404 Malloc0 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:37.404 [2024-10-28 05:00:27.817955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:37.404 [2024-10-28 05:00:27.842125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.404 05:00:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:37.662 
[2024-10-28 05:00:28.035779] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:39.033 Initializing NVMe Controllers 00:24:39.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:39.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:39.033 Initialization complete. Launching workers. 00:24:39.033 ======================================================== 00:24:39.033 Latency(us) 00:24:39.033 Device Information : IOPS MiB/s Average min max 00:24:39.033 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 125.00 15.62 33372.79 24059.59 64002.03 00:24:39.033 ======================================================== 00:24:39.033 Total : 125.00 15.62 33372.79 24059.59 64002.03 00:24:39.033 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:39.033 rmmod nvme_tcp 00:24:39.033 rmmod nvme_fabrics 00:24:39.033 rmmod nvme_keyring 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 2367817 ']' 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 2367817 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 2367817 
']' 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 2367817 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2367817 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2367817' 00:24:39.033 killing process with pid 2367817 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 2367817 00:24:39.033 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 2367817 00:24:39.295 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:39.295 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:39.295 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:39.295 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:39.295 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:24:39.295 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:39.295 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:24:39.295 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:39.295 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:39.295 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.295 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.295 05:00:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.198 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:41.198 00:24:41.198 real 0m7.428s 00:24:41.198 user 0m4.065s 00:24:41.198 sys 0m1.966s 00:24:41.198 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:41.198 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:41.198 ************************************ 00:24:41.198 END TEST nvmf_wait_for_buf 00:24:41.198 ************************************ 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:41.472 ************************************ 00:24:41.472 START TEST nvmf_fuzz 00:24:41.472 ************************************ 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:41.472 * Looking for test storage... 00:24:41.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1689 -- # lcov --version 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.472 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:41.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.473 --rc genhtml_branch_coverage=1 00:24:41.473 --rc genhtml_function_coverage=1 00:24:41.473 --rc genhtml_legend=1 00:24:41.473 --rc geninfo_all_blocks=1 00:24:41.473 --rc geninfo_unexecuted_blocks=1 00:24:41.473 00:24:41.473 ' 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:24:41.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.473 --rc genhtml_branch_coverage=1 00:24:41.473 --rc genhtml_function_coverage=1 00:24:41.473 --rc genhtml_legend=1 00:24:41.473 --rc geninfo_all_blocks=1 00:24:41.473 --rc geninfo_unexecuted_blocks=1 00:24:41.473 00:24:41.473 ' 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:41.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.473 --rc genhtml_branch_coverage=1 00:24:41.473 --rc genhtml_function_coverage=1 00:24:41.473 --rc genhtml_legend=1 00:24:41.473 --rc geninfo_all_blocks=1 00:24:41.473 --rc geninfo_unexecuted_blocks=1 00:24:41.473 00:24:41.473 ' 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:41.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.473 --rc genhtml_branch_coverage=1 00:24:41.473 --rc genhtml_function_coverage=1 00:24:41.473 --rc genhtml_legend=1 00:24:41.473 --rc geninfo_all_blocks=1 00:24:41.473 --rc geninfo_unexecuted_blocks=1 00:24:41.473 00:24:41.473 ' 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:41.473 05:00:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:43.487 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:43.488 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:43.488 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:43.488 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:43.488 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # is_hw=yes 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.488 05:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.488 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.488 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.488 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.488 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.488 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.488 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.488 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:43.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:24:43.747 00:24:43.747 --- 10.0.0.2 ping statistics --- 00:24:43.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.747 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:43.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:24:43.747 00:24:43.747 --- 10.0.0.1 ping statistics --- 00:24:43.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.747 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # return 0 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2370036 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2370036 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2370036 ']' 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
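Condensed for reference, the test topology that nvmftestinit assembles in the trace above (commands copied from the log; cvl_0_0 is the NIC port moved into the target namespace, cvl_0_1 stays in the host namespace as the initiator side):

# sketch of the nvmf_tcp_init steps traced above
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # host -> namespace (target IP)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host (initiator IP)

The fuzz-test nvmf_tgt is then started inside cvl_0_0_ns_spdk (via the NVMF_TARGET_NS_CMD prefix), so it listens on 10.0.0.2:4420 while the fuzzer connects from the host side. (The earlier "[: : integer expression expected" message from common.sh line 33 comes from an integer test whose variable is empty in this run; the test simply evaluates false and the script continues, so it is noise rather than a failure.)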
00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:43.747 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.005 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.006 Malloc0 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:44.006 05:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:16.074 Fuzzing completed. 
Shutting down the fuzz application 00:25:16.074 00:25:16.074 Dumping successful admin opcodes: 00:25:16.074 8, 9, 10, 24, 00:25:16.074 Dumping successful io opcodes: 00:25:16.074 0, 9, 00:25:16.074 NS: 0x2000008eff00 I/O qp, Total commands completed: 465091, total successful commands: 2690, random_seed: 2635654400 00:25:16.074 NS: 0x2000008eff00 admin qp, Total commands completed: 56064, total successful commands: 445, random_seed: 2190927936 00:25:16.075 05:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:16.075 Fuzzing completed. Shutting down the fuzz application 00:25:16.075 00:25:16.075 Dumping successful admin opcodes: 00:25:16.075 24, 00:25:16.075 Dumping successful io opcodes: 00:25:16.075 00:25:16.075 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3243083260 00:25:16.075 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3243194467 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:16.075 rmmod nvme_tcp 00:25:16.075 rmmod nvme_fabrics 00:25:16.075 rmmod nvme_keyring 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@515 -- # '[' -n 2370036 ']' 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # killprocess 2370036 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2370036 ']' 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 2370036 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:25:16.075 05:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2370036 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2370036' 00:25:16.075 killing process with pid 2370036 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 2370036 00:25:16.075 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 2370036 00:25:16.335 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:16.335 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:16.335 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:16.335 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:16.335 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-save 00:25:16.335 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:16.335 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-restore 00:25:16.335 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:16.335 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:16.335 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.335 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.335 05:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.239 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:18.239 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:18.239 00:25:18.239 real 0m36.950s 00:25:18.239 user 0m50.964s 00:25:18.239 sys 0m14.580s 00:25:18.239 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:18.239 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:18.239 ************************************ 00:25:18.239 END TEST nvmf_fuzz 00:25:18.239 ************************************ 00:25:18.239 05:01:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:18.239 05:01:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:18.239 05:01:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:18.239 05:01:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:18.239 
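Before the next test begins, the nvmf_fuzz flow that just finished reduces to a short RPC sequence plus two nvme_fuzz invocations (a 30 second seeded run, then a replay of example.json). A condensed sketch of the commands as traced above, with the long workspace path shortened to $SPDK_DIR purely for readability; rpc_cmd is the autotest helper that forwards to scripts/rpc.py:

# fabrics_fuzz.sh steps seen above ($SPDK_DIR is shorthand, not a variable from the log)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create -b Malloc0 64 512
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j $SPDK_DIR/test/app/fuzz/nvme_fuzz/example.json -a
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
nvmftestfini   # unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills the target, tears down the namespace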
************************************ 00:25:18.239 START TEST nvmf_multiconnection 00:25:18.239 ************************************ 00:25:18.239 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:18.498 * Looking for test storage... 00:25:18.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1689 -- # lcov --version 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:25:18.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.498 --rc genhtml_branch_coverage=1 00:25:18.498 --rc genhtml_function_coverage=1 00:25:18.498 --rc genhtml_legend=1 00:25:18.498 --rc geninfo_all_blocks=1 00:25:18.498 --rc geninfo_unexecuted_blocks=1 00:25:18.498 00:25:18.498 ' 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:25:18.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.498 --rc genhtml_branch_coverage=1 00:25:18.498 --rc genhtml_function_coverage=1 00:25:18.498 --rc genhtml_legend=1 00:25:18.498 --rc geninfo_all_blocks=1 00:25:18.498 --rc geninfo_unexecuted_blocks=1 00:25:18.498 00:25:18.498 ' 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:25:18.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.498 --rc genhtml_branch_coverage=1 00:25:18.498 --rc genhtml_function_coverage=1 00:25:18.498 --rc genhtml_legend=1 00:25:18.498 --rc geninfo_all_blocks=1 00:25:18.498 --rc geninfo_unexecuted_blocks=1 00:25:18.498 00:25:18.498 ' 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:25:18.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.498 --rc genhtml_branch_coverage=1 00:25:18.498 --rc genhtml_function_coverage=1 00:25:18.498 --rc genhtml_legend=1 00:25:18.498 --rc geninfo_all_blocks=1 00:25:18.498 --rc geninfo_unexecuted_blocks=1 00:25:18.498 00:25:18.498 ' 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.498 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:18.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:18.499 05:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:18.499 05:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.030 05:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:21.030 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:21.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:21.030 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:21.031 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:21.031 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # is_hw=yes 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:21.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:25:21.031 00:25:21.031 --- 10.0.0.2 ping statistics --- 00:25:21.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.031 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:25:21.031 00:25:21.031 --- 10.0.0.1 ping statistics --- 00:25:21.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.031 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # return 0 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # nvmfpid=2375645 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # waitforlisten 2375645 00:25:21.031 05:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 2375645 ']' 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:21.031 05:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.031 [2024-10-28 05:01:11.243193] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:25:21.031 [2024-10-28 05:01:11.243294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.031 [2024-10-28 05:01:11.382776] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:21.031 [2024-10-28 05:01:11.425301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.031 [2024-10-28 05:01:11.476406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.031 [2024-10-28 05:01:11.476473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.031 [2024-10-28 05:01:11.476489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.031 [2024-10-28 05:01:11.476503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.031 [2024-10-28 05:01:11.476516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
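For reference, the test topology that nvmf_tcp_init builds above (the target-side interface moved into a private network namespace, the initiator-side interface left in the root namespace) can be reproduced by hand with roughly the commands below. This is a minimal sketch assuming root privileges, the same cvl_0_0/cvl_0_1 interface pair and 10.0.0.0/24 addressing as this run, and that nvmf_tgt is started from the SPDK build directory; on other hardware the interface names will differ.

  # move the target-side interface into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1 in the root namespace, target namespace gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP port 4420 (NVMe/TCP) on the host-side interface, as nvmf/common.sh does above
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check reachability in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the SPDK NVMe-oF target inside the namespace, as nvmfappstart does above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &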
00:25:21.031 [2024-10-28 05:01:11.478210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.031 [2024-10-28 05:01:11.478265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.031 [2024-10-28 05:01:11.478380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.031 [2024-10-28 05:01:11.478383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 [2024-10-28 05:01:12.256644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 Malloc1 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
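The rpc_cmd calls that follow drive the target over SPDK's JSON-RPC socket (rpc_cmd is the autotest wrapper around scripts/rpc.py, talking to the default /var/tmp/spdk.sock). The pattern repeated below for cnode1 through cnode11 reduces to one transport RPC, four per-subsystem RPCs, and one initiator-side connect; a sketch of a single iteration, assuming scripts/rpc.py is run from the SPDK repository root and using the same names and 10.0.0.2:4420 listener as this run:

  # one-time: create the TCP transport (multiconnection.sh@19 above)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

  # per subsystem (the test loops over cnode1..cnode11 / Malloc1..Malloc11)
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1   # -s: serial, later matched by waitforserial
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side (root namespace): attach with the kernel nvme-tcp driver
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55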
00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 [2024-10-28 05:01:12.327534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 Malloc2 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 Malloc3 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 Malloc4 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.966 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.966 Malloc5 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.967 Malloc6 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.967 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.225 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.225 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 Malloc7 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 Malloc8 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 Malloc9 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:22.226 05:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 Malloc10 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 Malloc11 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.226 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.484 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.484 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:22.484 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.485 05:01:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:23.050 05:01:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:23.050 05:01:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:23.050 05:01:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.050 05:01:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:23.050 05:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:24.949 05:01:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:24.949 05:01:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:24.949 05:01:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:24.949 05:01:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:24.949 05:01:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.949 05:01:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:24.949 05:01:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.949 05:01:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:25.515 05:01:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:25.515 05:01:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:25.515 05:01:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:25.515 05:01:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:25.515 05:01:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:28.044 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:28.044 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:28.044 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:28.044 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:28.044 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.044 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:28.044 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.044 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:28.301 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:28.301 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:28.301 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:28.301 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:28.301 05:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:30.199 05:01:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:30.199 05:01:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:30.199 05:01:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:30.457 05:01:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:30.457 05:01:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:30.457 05:01:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:30.457 05:01:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.457 05:01:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:31.023 05:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:31.023 05:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:31.023 05:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:31.023 05:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:31.023 05:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:32.921 05:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:32.921 05:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:32.921 05:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:32.921 05:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:32.922 05:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:32.922 05:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:32.922 05:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.922 05:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:33.856 05:01:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:33.856 05:01:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:25:33.856 05:01:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:33.856 05:01:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:33.856 05:01:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:35.754 05:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:35.754 05:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:35.754 05:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:35.754 05:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:35.754 05:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:35.754 05:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:35.754 05:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.754 05:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:36.687 05:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:36.688 05:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:36.688 05:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:36.688 05:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:36.688 05:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:38.583 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:38.583 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:38.583 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:38.583 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:38.583 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:38.583 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:38.583 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.583 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:39.516 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:39.516 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:39.516 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.516 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:39.516 05:01:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:41.415 05:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:41.415 05:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:41.415 05:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:41.415 05:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:41.415 05:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.415 05:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:41.415 05:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.415 05:01:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:42.349 05:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:42.349 05:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:42.349 05:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:42.349 05:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:42.349 05:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:44.247 05:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:44.247 05:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:44.247 05:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:44.247 05:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:44.247 05:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.247 05:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:44.247 05:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.247 05:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:45.201 05:01:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:45.201 05:01:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:45.201 05:01:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:45.201 05:01:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:45.201 05:01:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:47.203 05:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:47.203 05:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:47.203 05:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:47.203 05:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:47.203 05:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.203 05:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:47.203 05:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.203 05:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:48.138 05:01:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:48.138 05:01:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:48.138 05:01:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:48.138 05:01:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:48.138 05:01:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:50.036 05:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:50.036 05:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:50.036 05:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:50.036 05:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:50.036 05:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:50.036 05:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:50.036 05:01:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.036 05:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:50.970 05:01:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:50.970 05:01:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:50.970 05:01:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:50.970 05:01:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:50.970 05:01:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:52.864 05:01:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:52.864 05:01:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:52.864 05:01:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:52.864 05:01:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:52.864 05:01:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:52.864 05:01:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:52.864 05:01:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:52.864 [global] 00:25:52.864 thread=1 00:25:52.864 invalidate=1 00:25:52.864 rw=read 00:25:52.864 time_based=1 00:25:52.864 runtime=10 00:25:52.864 ioengine=libaio 00:25:52.864 direct=1 00:25:52.864 bs=262144 00:25:52.864 iodepth=64 00:25:52.864 norandommap=1 00:25:52.864 numjobs=1 00:25:52.864 00:25:52.864 [job0] 00:25:52.864 filename=/dev/nvme0n1 00:25:52.864 [job1] 00:25:52.864 filename=/dev/nvme10n1 00:25:52.864 [job2] 00:25:52.864 filename=/dev/nvme1n1 00:25:52.864 [job3] 00:25:52.864 filename=/dev/nvme2n1 00:25:52.864 [job4] 00:25:52.864 filename=/dev/nvme3n1 00:25:52.864 [job5] 00:25:52.864 filename=/dev/nvme4n1 00:25:52.864 [job6] 00:25:52.864 filename=/dev/nvme5n1 00:25:52.864 [job7] 00:25:52.864 filename=/dev/nvme6n1 00:25:52.864 [job8] 00:25:52.864 filename=/dev/nvme7n1 00:25:52.864 [job9] 00:25:52.864 filename=/dev/nvme8n1 00:25:52.864 [job10] 00:25:52.864 filename=/dev/nvme9n1 00:25:53.120 Could not set queue depth (nvme0n1) 00:25:53.120 Could not set queue depth (nvme10n1) 00:25:53.120 Could not set queue depth (nvme1n1) 00:25:53.120 Could not set queue depth (nvme2n1) 00:25:53.120 Could not set queue depth (nvme3n1) 00:25:53.120 Could not set queue depth (nvme4n1) 00:25:53.120 Could not set queue depth (nvme5n1) 00:25:53.120 Could not set queue depth (nvme6n1) 00:25:53.120 Could not set queue depth (nvme7n1) 00:25:53.120 Could not set queue depth (nvme8n1) 00:25:53.120 Could not set queue depth (nvme9n1) 00:25:53.120 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.120 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.120 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.120 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.120 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.120 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.120 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.120 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.120 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.120 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.120 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.120 fio-3.35 00:25:53.120 Starting 11 threads 00:26:05.364 00:26:05.364 job0: (groupid=0, jobs=1): err= 0: pid=2379815: Mon Oct 28 05:01:54 2024 00:26:05.364 read: IOPS=189, BW=47.4MiB/s (49.7MB/s)(483MiB/10177msec) 00:26:05.364 slat (usec): min=9, max=423253, avg=3071.46, stdev=24108.06 00:26:05.364 clat (msec): min=2, max=1484, avg=333.89, stdev=342.54 00:26:05.364 lat (msec): min=2, max=1484, avg=336.96, stdev=345.67 00:26:05.364 clat percentiles (msec): 00:26:05.364 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 12], 20.00th=[ 16], 00:26:05.364 | 30.00th=[ 32], 40.00th=[ 96], 50.00th=[ 188], 60.00th=[ 418], 00:26:05.364 | 70.00th=[ 518], 80.00th=[ 634], 90.00th=[ 852], 95.00th=[ 1028], 00:26:05.364 | 99.00th=[ 1133], 99.50th=[ 1200], 99.90th=[ 1485], 99.95th=[ 1485], 00:26:05.364 | 99.99th=[ 1485] 00:26:05.364 bw ( KiB/s): min= 8175, max=212992, per=5.55%, avg=47794.35, stdev=58017.78, samples=20 00:26:05.364 iops : min= 31, max= 832, avg=186.65, stdev=226.67, samples=20 00:26:05.364 lat (msec) : 4=6.37%, 10=2.54%, 20=20.04%, 50=2.80%, 100=9.74% 00:26:05.364 lat (msec) : 250=13.72%, 500=13.98%, 750=14.97%, 1000=10.72%, 2000=5.13% 00:26:05.364 cpu : usr=0.12%, sys=0.81%, ctx=755, majf=0, minf=4097 00:26:05.364 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:05.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.364 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.364 issued rwts: total=1931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.364 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.364 job1: (groupid=0, jobs=1): err= 0: pid=2379816: Mon Oct 28 05:01:54 2024 00:26:05.364 read: IOPS=181, BW=45.3MiB/s (47.5MB/s)(460MiB/10170msec) 00:26:05.364 slat (usec): min=9, max=655422, avg=4306.98, stdev=28154.25 00:26:05.364 clat (msec): min=3, max=1669, avg=348.84, stdev=310.93 00:26:05.364 lat (msec): min=3, max=1669, avg=353.15, stdev=313.91 00:26:05.364 clat percentiles (msec): 00:26:05.364 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 32], 20.00th=[ 84], 00:26:05.364 | 30.00th=[ 146], 40.00th=[ 190], 50.00th=[ 271], 60.00th=[ 422], 00:26:05.364 | 70.00th=[ 477], 80.00th=[ 550], 90.00th=[ 659], 95.00th=[ 902], 00:26:05.364 | 99.00th=[ 1536], 99.50th=[ 1653], 99.90th=[ 1653], 99.95th=[ 1670], 00:26:05.364 | 99.99th=[ 1670] 00:26:05.364 bw 
( KiB/s): min= 2560, max=142336, per=5.29%, avg=45513.30, stdev=36021.54, samples=20 00:26:05.364 iops : min= 10, max= 556, avg=177.75, stdev=140.72, samples=20 00:26:05.364 lat (msec) : 4=0.22%, 10=2.88%, 20=5.32%, 50=4.40%, 100=9.78% 00:26:05.364 lat (msec) : 250=27.16%, 500=22.60%, 750=20.91%, 1000=3.10%, 2000=3.64% 00:26:05.364 cpu : usr=0.08%, sys=0.56%, ctx=355, majf=0, minf=4098 00:26:05.364 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:05.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.364 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.364 issued rwts: total=1841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.364 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.364 job2: (groupid=0, jobs=1): err= 0: pid=2379817: Mon Oct 28 05:01:54 2024 00:26:05.364 read: IOPS=208, BW=52.0MiB/s (54.6MB/s)(529MiB/10171msec) 00:26:05.364 slat (usec): min=9, max=457032, avg=3080.76, stdev=21825.69 00:26:05.364 clat (usec): min=881, max=1318.8k, avg=304136.49, stdev=283872.65 00:26:05.364 lat (usec): min=904, max=1318.8k, avg=307217.25, stdev=286292.09 00:26:05.364 clat percentiles (usec): 00:26:05.364 | 1.00th=[ 1401], 5.00th=[ 20841], 10.00th=[ 24773], 00:26:05.364 | 20.00th=[ 34341], 30.00th=[ 89654], 40.00th=[ 116917], 00:26:05.364 | 50.00th=[ 221250], 60.00th=[ 346031], 70.00th=[ 459277], 00:26:05.364 | 80.00th=[ 549454], 90.00th=[ 658506], 95.00th=[ 851444], 00:26:05.364 | 99.00th=[1098908], 99.50th=[1115685], 99.90th=[1317012], 00:26:05.364 | 99.95th=[1317012], 99.99th=[1317012] 00:26:05.364 bw ( KiB/s): min= 1536, max=229835, per=6.11%, avg=52570.35, stdev=49465.62, samples=20 00:26:05.364 iops : min= 6, max= 897, avg=205.30, stdev=193.06, samples=20 00:26:05.364 lat (usec) : 1000=0.19% 00:26:05.364 lat (msec) : 2=1.70%, 4=0.28%, 10=1.13%, 20=1.23%, 50=19.18% 00:26:05.364 lat (msec) : 100=9.83%, 250=17.62%, 500=24.33%, 750=17.24%, 1000=3.16% 00:26:05.364 lat (msec) : 2000=4.11% 00:26:05.364 cpu : usr=0.09%, sys=0.62%, ctx=446, majf=0, minf=4097 00:26:05.364 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:26:05.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.364 issued rwts: total=2117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.364 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.364 job3: (groupid=0, jobs=1): err= 0: pid=2379818: Mon Oct 28 05:01:54 2024 00:26:05.364 read: IOPS=185, BW=46.5MiB/s (48.7MB/s)(473MiB/10172msec) 00:26:05.364 slat (usec): min=10, max=390430, avg=5086.42, stdev=26327.48 00:26:05.364 clat (msec): min=15, max=1234, avg=339.00, stdev=266.90 00:26:05.364 lat (msec): min=15, max=1234, avg=344.08, stdev=271.01 00:26:05.364 clat percentiles (msec): 00:26:05.364 | 1.00th=[ 27], 5.00th=[ 39], 10.00th=[ 79], 20.00th=[ 106], 00:26:05.364 | 30.00th=[ 130], 40.00th=[ 180], 50.00th=[ 245], 60.00th=[ 351], 00:26:05.364 | 70.00th=[ 468], 80.00th=[ 592], 90.00th=[ 743], 95.00th=[ 860], 00:26:05.364 | 99.00th=[ 1099], 99.50th=[ 1133], 99.90th=[ 1217], 99.95th=[ 1234], 00:26:05.364 | 99.99th=[ 1234] 00:26:05.364 bw ( KiB/s): min=12288, max=162304, per=5.43%, avg=46768.15, stdev=44398.69, samples=20 00:26:05.364 iops : min= 48, max= 634, avg=182.65, stdev=173.45, samples=20 00:26:05.364 lat (msec) : 20=0.26%, 50=6.40%, 100=9.84%, 250=35.19%, 500=21.11% 00:26:05.364 lat (msec) : 750=17.51%, 
1000=8.15%, 2000=1.53% 00:26:05.364 cpu : usr=0.16%, sys=0.66%, ctx=243, majf=0, minf=3722 00:26:05.364 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:05.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.364 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.364 issued rwts: total=1890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.364 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.364 job4: (groupid=0, jobs=1): err= 0: pid=2379819: Mon Oct 28 05:01:54 2024 00:26:05.364 read: IOPS=335, BW=83.8MiB/s (87.8MB/s)(843MiB/10065msec) 00:26:05.364 slat (usec): min=9, max=388750, avg=1816.54, stdev=12331.91 00:26:05.364 clat (usec): min=1982, max=871302, avg=189000.44, stdev=171000.62 00:26:05.364 lat (msec): min=2, max=1168, avg=190.82, stdev=172.98 00:26:05.364 clat percentiles (msec): 00:26:05.364 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 71], 00:26:05.364 | 30.00th=[ 91], 40.00th=[ 102], 50.00th=[ 132], 60.00th=[ 153], 00:26:05.364 | 70.00th=[ 192], 80.00th=[ 255], 90.00th=[ 481], 95.00th=[ 575], 00:26:05.364 | 99.00th=[ 751], 99.50th=[ 810], 99.90th=[ 860], 99.95th=[ 869], 00:26:05.364 | 99.99th=[ 869] 00:26:05.364 bw ( KiB/s): min=20992, max=189952, per=9.84%, avg=84739.30, stdev=49213.31, samples=20 00:26:05.364 iops : min= 82, max= 742, avg=331.00, stdev=192.25, samples=20 00:26:05.364 lat (msec) : 2=0.03%, 4=0.12%, 10=1.19%, 20=1.87%, 50=8.95% 00:26:05.364 lat (msec) : 100=27.51%, 250=40.26%, 500=11.38%, 750=7.56%, 1000=1.13% 00:26:05.364 cpu : usr=0.17%, sys=1.09%, ctx=1072, majf=0, minf=4097 00:26:05.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:26:05.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.364 issued rwts: total=3373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.364 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.364 job5: (groupid=0, jobs=1): err= 0: pid=2379826: Mon Oct 28 05:01:54 2024 00:26:05.364 read: IOPS=434, BW=109MiB/s (114MB/s)(1096MiB/10081msec) 00:26:05.364 slat (usec): min=8, max=393135, avg=1501.34, stdev=14735.38 00:26:05.364 clat (usec): min=1955, max=1060.8k, avg=145628.89, stdev=234809.54 00:26:05.364 lat (msec): min=3, max=1087, avg=147.13, stdev=237.28 00:26:05.364 clat percentiles (msec): 00:26:05.364 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 25], 00:26:05.364 | 30.00th=[ 39], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 44], 00:26:05.364 | 70.00th=[ 50], 80.00th=[ 182], 90.00th=[ 575], 95.00th=[ 718], 00:26:05.364 | 99.00th=[ 936], 99.50th=[ 944], 99.90th=[ 1011], 99.95th=[ 1011], 00:26:05.365 | 99.99th=[ 1062] 00:26:05.365 bw ( KiB/s): min= 2052, max=424960, per=12.84%, avg=110544.30, stdev=138839.01, samples=20 00:26:05.365 iops : min= 8, max= 1660, avg=431.80, stdev=542.35, samples=20 00:26:05.365 lat (msec) : 2=0.02%, 4=0.02%, 10=1.14%, 20=13.49%, 50=55.80% 00:26:05.365 lat (msec) : 100=5.59%, 250=6.53%, 500=5.27%, 750=7.53%, 1000=4.45% 00:26:05.365 lat (msec) : 2000=0.16% 00:26:05.365 cpu : usr=0.18%, sys=1.08%, ctx=1036, majf=0, minf=4097 00:26:05.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:05.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.365 issued rwts: total=4382,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:26:05.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.365 job6: (groupid=0, jobs=1): err= 0: pid=2379827: Mon Oct 28 05:01:54 2024 00:26:05.365 read: IOPS=191, BW=47.8MiB/s (50.1MB/s)(486MiB/10171msec) 00:26:05.365 slat (usec): min=13, max=362795, avg=5140.27, stdev=23310.85 00:26:05.365 clat (msec): min=34, max=1158, avg=329.22, stdev=263.78 00:26:05.365 lat (msec): min=34, max=1160, avg=334.36, stdev=267.66 00:26:05.365 clat percentiles (msec): 00:26:05.365 | 1.00th=[ 41], 5.00th=[ 50], 10.00th=[ 54], 20.00th=[ 89], 00:26:05.365 | 30.00th=[ 116], 40.00th=[ 157], 50.00th=[ 284], 60.00th=[ 380], 00:26:05.365 | 70.00th=[ 439], 80.00th=[ 535], 90.00th=[ 701], 95.00th=[ 911], 00:26:05.365 | 99.00th=[ 1036], 99.50th=[ 1083], 99.90th=[ 1167], 99.95th=[ 1167], 00:26:05.365 | 99.99th=[ 1167] 00:26:05.365 bw ( KiB/s): min= 6656, max=220160, per=5.59%, avg=48157.05, stdev=51138.52, samples=20 00:26:05.365 iops : min= 26, max= 860, avg=188.10, stdev=199.76, samples=20 00:26:05.365 lat (msec) : 50=5.40%, 100=17.43%, 250=24.06%, 500=29.82%, 750=14.65% 00:26:05.365 lat (msec) : 1000=7.35%, 2000=1.29% 00:26:05.365 cpu : usr=0.10%, sys=0.85%, ctx=298, majf=0, minf=4097 00:26:05.365 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:26:05.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.365 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.365 issued rwts: total=1945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.365 job7: (groupid=0, jobs=1): err= 0: pid=2379828: Mon Oct 28 05:01:54 2024 00:26:05.365 read: IOPS=484, BW=121MiB/s (127MB/s)(1220MiB/10064msec) 00:26:05.365 slat (usec): min=9, max=441729, avg=1283.02, stdev=11921.02 00:26:05.365 clat (msec): min=2, max=935, avg=130.58, stdev=182.87 00:26:05.365 lat (msec): min=2, max=959, avg=131.86, stdev=184.60 00:26:05.365 clat percentiles (msec): 00:26:05.365 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 41], 00:26:05.365 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 47], 60.00th=[ 51], 00:26:05.365 | 70.00th=[ 87], 80.00th=[ 140], 90.00th=[ 481], 95.00th=[ 592], 00:26:05.365 | 99.00th=[ 802], 99.50th=[ 885], 99.90th=[ 927], 99.95th=[ 927], 00:26:05.365 | 99.99th=[ 936] 00:26:05.365 bw ( KiB/s): min=13312, max=400896, per=14.33%, avg=123347.20, stdev=131551.90, samples=20 00:26:05.365 iops : min= 52, max= 1566, avg=481.80, stdev=513.89, samples=20 00:26:05.365 lat (msec) : 4=0.08%, 20=0.14%, 50=59.07%, 100=13.69%, 250=13.26% 00:26:05.365 lat (msec) : 500=4.67%, 750=7.23%, 1000=1.86% 00:26:05.365 cpu : usr=0.31%, sys=1.50%, ctx=953, majf=0, minf=4097 00:26:05.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:05.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.365 issued rwts: total=4881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.365 job8: (groupid=0, jobs=1): err= 0: pid=2379829: Mon Oct 28 05:01:54 2024 00:26:05.365 read: IOPS=130, BW=32.6MiB/s (34.2MB/s)(329MiB/10082msec) 00:26:05.365 slat (usec): min=9, max=415934, avg=7420.72, stdev=32899.56 00:26:05.365 clat (msec): min=49, max=1346, avg=482.82, stdev=222.13 00:26:05.365 lat (msec): min=49, max=1346, avg=490.24, stdev=225.44 00:26:05.365 clat percentiles 
(msec): 00:26:05.365 | 1.00th=[ 73], 5.00th=[ 146], 10.00th=[ 188], 20.00th=[ 275], 00:26:05.365 | 30.00th=[ 359], 40.00th=[ 418], 50.00th=[ 481], 60.00th=[ 531], 00:26:05.365 | 70.00th=[ 592], 80.00th=[ 659], 90.00th=[ 735], 95.00th=[ 827], 00:26:05.365 | 99.00th=[ 1070], 99.50th=[ 1200], 99.90th=[ 1351], 99.95th=[ 1351], 00:26:05.365 | 99.99th=[ 1351] 00:26:05.365 bw ( KiB/s): min=10752, max=76288, per=3.72%, avg=32047.80, stdev=14970.58, samples=20 00:26:05.365 iops : min= 42, max= 298, avg=125.15, stdev=58.47, samples=20 00:26:05.365 lat (msec) : 50=0.15%, 100=2.59%, 250=14.37%, 500=37.95%, 750=36.27% 00:26:05.365 lat (msec) : 1000=6.01%, 2000=2.66% 00:26:05.365 cpu : usr=0.04%, sys=0.45%, ctx=159, majf=0, minf=4097 00:26:05.365 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:26:05.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.365 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.365 issued rwts: total=1315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.365 job9: (groupid=0, jobs=1): err= 0: pid=2379830: Mon Oct 28 05:01:54 2024 00:26:05.365 read: IOPS=497, BW=124MiB/s (130MB/s)(1253MiB/10079msec) 00:26:05.365 slat (usec): min=8, max=344203, avg=1548.21, stdev=10218.82 00:26:05.365 clat (usec): min=1516, max=883771, avg=127077.10, stdev=157453.49 00:26:05.365 lat (usec): min=1556, max=883794, avg=128625.31, stdev=159392.22 00:26:05.365 clat percentiles (msec): 00:26:05.365 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 26], 20.00th=[ 35], 00:26:05.365 | 30.00th=[ 42], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 71], 00:26:05.365 | 70.00th=[ 100], 80.00th=[ 174], 90.00th=[ 368], 95.00th=[ 531], 00:26:05.365 | 99.00th=[ 676], 99.50th=[ 743], 99.90th=[ 877], 99.95th=[ 877], 00:26:05.365 | 99.99th=[ 885] 00:26:05.365 bw ( KiB/s): min=12288, max=458240, per=14.71%, avg=126635.45, stdev=108867.33, samples=20 00:26:05.365 iops : min= 48, max= 1790, avg=494.65, stdev=425.27, samples=20 00:26:05.365 lat (msec) : 2=0.04%, 4=0.12%, 10=4.95%, 20=2.97%, 50=26.03% 00:26:05.365 lat (msec) : 100=36.01%, 250=16.39%, 500=7.13%, 750=5.93%, 1000=0.44% 00:26:05.365 cpu : usr=0.19%, sys=1.40%, ctx=1403, majf=0, minf=4098 00:26:05.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:05.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.365 issued rwts: total=5010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.365 job10: (groupid=0, jobs=1): err= 0: pid=2379831: Mon Oct 28 05:01:54 2024 00:26:05.365 read: IOPS=545, BW=136MiB/s (143MB/s)(1386MiB/10171msec) 00:26:05.365 slat (usec): min=9, max=368768, avg=1590.85, stdev=10212.11 00:26:05.365 clat (usec): min=1730, max=1134.8k, avg=115692.05, stdev=155532.34 00:26:05.365 lat (usec): min=1842, max=1134.8k, avg=117282.90, stdev=157606.41 00:26:05.365 clat percentiles (msec): 00:26:05.365 | 1.00th=[ 14], 5.00th=[ 26], 10.00th=[ 39], 20.00th=[ 45], 00:26:05.365 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 69], 00:26:05.365 | 70.00th=[ 83], 80.00th=[ 112], 90.00th=[ 326], 95.00th=[ 485], 00:26:05.365 | 99.00th=[ 818], 99.50th=[ 927], 99.90th=[ 1083], 99.95th=[ 1083], 00:26:05.365 | 99.99th=[ 1133] 00:26:05.365 bw ( KiB/s): min=20480, max=328704, per=16.29%, avg=140296.65, 
stdev=110512.66, samples=20 00:26:05.365 iops : min= 80, max= 1284, avg=548.00, stdev=431.69, samples=20 00:26:05.365 lat (msec) : 2=0.04%, 4=0.76%, 10=0.09%, 20=1.33%, 50=30.39% 00:26:05.365 lat (msec) : 100=43.90%, 250=11.58%, 500=7.32%, 750=3.28%, 1000=0.97% 00:26:05.365 lat (msec) : 2000=0.34% 00:26:05.365 cpu : usr=0.29%, sys=1.89%, ctx=957, majf=0, minf=4097 00:26:05.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:05.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.365 issued rwts: total=5545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.365 00:26:05.365 Run status group 0 (all jobs): 00:26:05.365 READ: bw=841MiB/s (882MB/s), 32.6MiB/s-136MiB/s (34.2MB/s-143MB/s), io=8558MiB (8973MB), run=10064-10177msec 00:26:05.365 00:26:05.365 Disk stats (read/write): 00:26:05.365 nvme0n1: ios=3795/0, merge=0/0, ticks=1254183/0, in_queue=1254183, util=97.13% 00:26:05.365 nvme10n1: ios=3534/0, merge=0/0, ticks=1202701/0, in_queue=1202701, util=97.27% 00:26:05.365 nvme1n1: ios=4229/0, merge=0/0, ticks=1274100/0, in_queue=1274100, util=97.63% 00:26:05.365 nvme2n1: ios=3767/0, merge=0/0, ticks=1264397/0, in_queue=1264397, util=97.78% 00:26:05.365 nvme3n1: ios=6558/0, merge=0/0, ticks=1233188/0, in_queue=1233188, util=97.79% 00:26:05.365 nvme4n1: ios=8571/0, merge=0/0, ticks=1225514/0, in_queue=1225514, util=98.15% 00:26:05.365 nvme5n1: ios=3846/0, merge=0/0, ticks=1259013/0, in_queue=1259013, util=98.37% 00:26:05.365 nvme6n1: ios=9562/0, merge=0/0, ticks=1244179/0, in_queue=1244179, util=98.44% 00:26:05.365 nvme7n1: ios=2456/0, merge=0/0, ticks=1232019/0, in_queue=1232019, util=98.91% 00:26:05.365 nvme8n1: ios=9845/0, merge=0/0, ticks=1235331/0, in_queue=1235331, util=99.12% 00:26:05.365 nvme9n1: ios=11089/0, merge=0/0, ticks=1272758/0, in_queue=1272758, util=99.27% 00:26:05.365 05:01:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:05.365 [global] 00:26:05.365 thread=1 00:26:05.365 invalidate=1 00:26:05.365 rw=randwrite 00:26:05.365 time_based=1 00:26:05.365 runtime=10 00:26:05.365 ioengine=libaio 00:26:05.365 direct=1 00:26:05.365 bs=262144 00:26:05.365 iodepth=64 00:26:05.365 norandommap=1 00:26:05.365 numjobs=1 00:26:05.365 00:26:05.365 [job0] 00:26:05.365 filename=/dev/nvme0n1 00:26:05.365 [job1] 00:26:05.365 filename=/dev/nvme10n1 00:26:05.365 [job2] 00:26:05.365 filename=/dev/nvme1n1 00:26:05.365 [job3] 00:26:05.365 filename=/dev/nvme2n1 00:26:05.365 [job4] 00:26:05.365 filename=/dev/nvme3n1 00:26:05.365 [job5] 00:26:05.365 filename=/dev/nvme4n1 00:26:05.365 [job6] 00:26:05.365 filename=/dev/nvme5n1 00:26:05.365 [job7] 00:26:05.365 filename=/dev/nvme6n1 00:26:05.365 [job8] 00:26:05.365 filename=/dev/nvme7n1 00:26:05.365 [job9] 00:26:05.365 filename=/dev/nvme8n1 00:26:05.365 [job10] 00:26:05.365 filename=/dev/nvme9n1 00:26:05.365 Could not set queue depth (nvme0n1) 00:26:05.365 Could not set queue depth (nvme10n1) 00:26:05.365 Could not set queue depth (nvme1n1) 00:26:05.365 Could not set queue depth (nvme2n1) 00:26:05.365 Could not set queue depth (nvme3n1) 00:26:05.365 Could not set queue depth (nvme4n1) 00:26:05.365 Could not set queue depth (nvme5n1) 00:26:05.366 Could not set queue depth (nvme6n1) 00:26:05.366 
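For reference, the randwrite pass that begins here reuses the job file printed just above: the fio-wrapper flags (-i 262144 -d 64 -t randwrite -r 10) line up with the [global] options bs=262144, iodepth=64, rw=randwrite and runtime=10, with one [jobN] section per connected namespace as listed. The preceding read pass is self-consistent: job0 issued 1931 requests of 256 KiB (about 483 MiB) in 10.177 s, i.e. the reported 47.4 MiB/s at IOPS=189, and the group total of 8558 MiB over roughly 10.2 s gives the aggregate 841 MiB/s. The sketch below shows how the same workload could be reproduced standalone; it is an illustration under those assumptions, not the fio-wrapper script itself, and the job-file name is made up (only one of the 11 job sections is spelled out).

# job options copied from the listing printed above; one [jobN] stanza per namespace
cat > multiconn-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
EOF

# run it; fio emits the same per-job summary format seen in this log
fio multiconn-randwrite.fio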
Could not set queue depth (nvme7n1) 00:26:05.366 Could not set queue depth (nvme8n1) 00:26:05.366 Could not set queue depth (nvme9n1) 00:26:05.366 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.366 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.366 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.366 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.366 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.366 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.366 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.366 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.366 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.366 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.366 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:05.366 fio-3.35 00:26:05.366 Starting 11 threads 00:26:15.340 00:26:15.340 job0: (groupid=0, jobs=1): err= 0: pid=2380522: Mon Oct 28 05:02:05 2024 00:26:15.340 write: IOPS=199, BW=49.9MiB/s (52.3MB/s)(509MiB/10208msec); 0 zone resets 00:26:15.340 slat (usec): min=26, max=182160, avg=4368.00, stdev=10090.85 00:26:15.340 clat (msec): min=14, max=649, avg=316.15, stdev=128.10 00:26:15.340 lat (msec): min=16, max=659, avg=320.52, stdev=129.78 00:26:15.340 clat percentiles (msec): 00:26:15.340 | 1.00th=[ 40], 5.00th=[ 123], 10.00th=[ 140], 20.00th=[ 182], 00:26:15.340 | 30.00th=[ 251], 40.00th=[ 288], 50.00th=[ 317], 60.00th=[ 355], 00:26:15.340 | 70.00th=[ 401], 80.00th=[ 435], 90.00th=[ 481], 95.00th=[ 514], 00:26:15.340 | 99.00th=[ 584], 99.50th=[ 600], 99.90th=[ 634], 99.95th=[ 642], 00:26:15.340 | 99.99th=[ 651] 00:26:15.340 bw ( KiB/s): min=26112, max=108544, per=5.12%, avg=50518.25, stdev=22471.72, samples=20 00:26:15.340 iops : min= 102, max= 424, avg=197.30, stdev=87.74, samples=20 00:26:15.340 lat (msec) : 20=0.15%, 50=1.37%, 100=1.96%, 250=26.07%, 500=64.11% 00:26:15.340 lat (msec) : 750=6.33% 00:26:15.340 cpu : usr=0.67%, sys=0.64%, ctx=733, majf=0, minf=1 00:26:15.340 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:15.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.340 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.340 issued rwts: total=0,2037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.340 job1: (groupid=0, jobs=1): err= 0: pid=2380523: Mon Oct 28 05:02:05 2024 00:26:15.340 write: IOPS=450, BW=113MiB/s (118MB/s)(1155MiB/10244msec); 0 zone resets 00:26:15.340 slat (usec): min=16, max=196152, avg=1405.37, stdev=5820.70 00:26:15.340 clat (usec): min=903, max=573173, avg=140431.76, stdev=118594.01 00:26:15.340 lat (usec): min=923, max=573629, avg=141837.13, stdev=119752.99 00:26:15.340 clat 
percentiles (msec): 00:26:15.340 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 15], 20.00th=[ 43], 00:26:15.340 | 30.00th=[ 62], 40.00th=[ 81], 50.00th=[ 104], 60.00th=[ 129], 00:26:15.340 | 70.00th=[ 180], 80.00th=[ 245], 90.00th=[ 326], 95.00th=[ 388], 00:26:15.340 | 99.00th=[ 451], 99.50th=[ 518], 99.90th=[ 567], 99.95th=[ 567], 00:26:15.340 | 99.99th=[ 575] 00:26:15.340 bw ( KiB/s): min=38912, max=253440, per=11.82%, avg=116605.60, stdev=71131.23, samples=20 00:26:15.340 iops : min= 152, max= 990, avg=455.45, stdev=277.88, samples=20 00:26:15.340 lat (usec) : 1000=0.04% 00:26:15.340 lat (msec) : 2=0.52%, 4=1.97%, 10=6.21%, 20=3.33%, 50=11.95% 00:26:15.340 lat (msec) : 100=24.66%, 250=32.27%, 500=18.38%, 750=0.65% 00:26:15.340 cpu : usr=1.32%, sys=1.57%, ctx=2757, majf=0, minf=1 00:26:15.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:15.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.340 issued rwts: total=0,4618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.340 job2: (groupid=0, jobs=1): err= 0: pid=2380524: Mon Oct 28 05:02:05 2024 00:26:15.340 write: IOPS=319, BW=79.8MiB/s (83.7MB/s)(815MiB/10209msec); 0 zone resets 00:26:15.340 slat (usec): min=21, max=179377, avg=2305.14, stdev=7653.78 00:26:15.340 clat (msec): min=2, max=622, avg=198.12, stdev=162.97 00:26:15.340 lat (msec): min=2, max=623, avg=200.43, stdev=164.82 00:26:15.340 clat percentiles (msec): 00:26:15.340 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 41], 20.00th=[ 47], 00:26:15.340 | 30.00th=[ 54], 40.00th=[ 91], 50.00th=[ 128], 60.00th=[ 243], 00:26:15.340 | 70.00th=[ 300], 80.00th=[ 359], 90.00th=[ 443], 95.00th=[ 493], 00:26:15.340 | 99.00th=[ 575], 99.50th=[ 584], 99.90th=[ 600], 99.95th=[ 625], 00:26:15.340 | 99.99th=[ 625] 00:26:15.340 bw ( KiB/s): min=28672, max=307712, per=8.29%, avg=81788.30, stdev=71211.62, samples=20 00:26:15.340 iops : min= 112, max= 1202, avg=319.45, stdev=278.19, samples=20 00:26:15.340 lat (msec) : 4=0.28%, 10=3.01%, 20=3.07%, 50=18.97%, 100=18.48% 00:26:15.340 lat (msec) : 250=19.21%, 500=32.32%, 750=4.67% 00:26:15.340 cpu : usr=1.16%, sys=1.11%, ctx=1734, majf=0, minf=1 00:26:15.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:15.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.340 issued rwts: total=0,3258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.340 job3: (groupid=0, jobs=1): err= 0: pid=2380536: Mon Oct 28 05:02:05 2024 00:26:15.340 write: IOPS=440, BW=110MiB/s (115MB/s)(1124MiB/10216msec); 0 zone resets 00:26:15.340 slat (usec): min=24, max=142882, avg=1763.15, stdev=5712.82 00:26:15.340 clat (msec): min=2, max=622, avg=143.52, stdev=119.88 00:26:15.341 lat (msec): min=2, max=622, avg=145.29, stdev=121.37 00:26:15.341 clat percentiles (msec): 00:26:15.341 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 51], 00:26:15.341 | 30.00th=[ 57], 40.00th=[ 91], 50.00th=[ 106], 60.00th=[ 129], 00:26:15.341 | 70.00th=[ 159], 80.00th=[ 199], 90.00th=[ 355], 95.00th=[ 426], 00:26:15.341 | 99.00th=[ 510], 99.50th=[ 542], 99.90th=[ 592], 99.95th=[ 617], 00:26:15.341 | 99.99th=[ 625] 00:26:15.341 bw ( KiB/s): min=26624, max=268800, per=11.50%, 
avg=113456.70, stdev=77427.38, samples=20 00:26:15.341 iops : min= 104, max= 1050, avg=443.10, stdev=302.28, samples=20 00:26:15.341 lat (msec) : 4=0.09%, 10=0.47%, 20=1.60%, 50=17.19%, 100=26.27% 00:26:15.341 lat (msec) : 250=38.92%, 500=14.03%, 750=1.42% 00:26:15.341 cpu : usr=1.43%, sys=1.41%, ctx=1913, majf=0, minf=1 00:26:15.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:15.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.341 issued rwts: total=0,4496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.341 job4: (groupid=0, jobs=1): err= 0: pid=2380537: Mon Oct 28 05:02:05 2024 00:26:15.341 write: IOPS=413, BW=103MiB/s (109MB/s)(1063MiB/10267msec); 0 zone resets 00:26:15.341 slat (usec): min=14, max=76717, avg=1497.85, stdev=5188.65 00:26:15.341 clat (usec): min=1127, max=718990, avg=152961.94, stdev=141156.04 00:26:15.341 lat (usec): min=1164, max=719074, avg=154459.80, stdev=142380.74 00:26:15.341 clat percentiles (msec): 00:26:15.341 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 47], 20.00th=[ 52], 00:26:15.341 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 114], 00:26:15.341 | 70.00th=[ 203], 80.00th=[ 275], 90.00th=[ 388], 95.00th=[ 447], 00:26:15.341 | 99.00th=[ 550], 99.50th=[ 575], 99.90th=[ 693], 99.95th=[ 693], 00:26:15.341 | 99.99th=[ 718] 00:26:15.341 bw ( KiB/s): min=33792, max=270848, per=10.86%, avg=107185.30, stdev=79839.44, samples=20 00:26:15.341 iops : min= 132, max= 1058, avg=418.65, stdev=311.91, samples=20 00:26:15.341 lat (msec) : 2=0.19%, 4=0.33%, 10=1.44%, 20=2.40%, 50=11.46% 00:26:15.341 lat (msec) : 100=40.05%, 250=19.55%, 500=22.54%, 750=2.05% 00:26:15.341 cpu : usr=1.32%, sys=1.10%, ctx=2121, majf=0, minf=1 00:26:15.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:15.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.341 issued rwts: total=0,4250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.341 job5: (groupid=0, jobs=1): err= 0: pid=2380538: Mon Oct 28 05:02:05 2024 00:26:15.341 write: IOPS=465, BW=116MiB/s (122MB/s)(1189MiB/10210msec); 0 zone resets 00:26:15.341 slat (usec): min=16, max=203566, avg=1185.20, stdev=5214.93 00:26:15.341 clat (usec): min=1049, max=611099, avg=135712.85, stdev=130765.20 00:26:15.341 lat (usec): min=1080, max=626329, avg=136898.05, stdev=131855.69 00:26:15.341 clat percentiles (msec): 00:26:15.341 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 10], 20.00th=[ 33], 00:26:15.341 | 30.00th=[ 45], 40.00th=[ 56], 50.00th=[ 87], 60.00th=[ 110], 00:26:15.341 | 70.00th=[ 165], 80.00th=[ 262], 90.00th=[ 355], 95.00th=[ 401], 00:26:15.341 | 99.00th=[ 523], 99.50th=[ 531], 99.90th=[ 567], 99.95th=[ 575], 00:26:15.341 | 99.99th=[ 609] 00:26:15.341 bw ( KiB/s): min=44032, max=276480, per=12.18%, avg=120159.50, stdev=63105.41, samples=20 00:26:15.341 iops : min= 172, max= 1080, avg=469.30, stdev=246.50, samples=20 00:26:15.341 lat (msec) : 2=0.44%, 4=2.42%, 10=7.38%, 20=6.47%, 50=19.87% 00:26:15.341 lat (msec) : 100=20.62%, 250=21.40%, 500=20.22%, 750=1.18% 00:26:15.341 cpu : usr=1.14%, sys=1.56%, ctx=3034, majf=0, minf=1 00:26:15.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 
00:26:15.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.341 issued rwts: total=0,4757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.341 job6: (groupid=0, jobs=1): err= 0: pid=2380539: Mon Oct 28 05:02:05 2024 00:26:15.341 write: IOPS=211, BW=52.9MiB/s (55.4MB/s)(543MiB/10270msec); 0 zone resets 00:26:15.341 slat (usec): min=25, max=129383, avg=3831.68, stdev=9518.96 00:26:15.341 clat (msec): min=9, max=672, avg=298.50, stdev=138.29 00:26:15.341 lat (msec): min=9, max=672, avg=302.33, stdev=140.35 00:26:15.341 clat percentiles (msec): 00:26:15.341 | 1.00th=[ 61], 5.00th=[ 118], 10.00th=[ 144], 20.00th=[ 184], 00:26:15.341 | 30.00th=[ 207], 40.00th=[ 239], 50.00th=[ 262], 60.00th=[ 300], 00:26:15.341 | 70.00th=[ 368], 80.00th=[ 439], 90.00th=[ 493], 95.00th=[ 575], 00:26:15.341 | 99.00th=[ 651], 99.50th=[ 667], 99.90th=[ 667], 99.95th=[ 676], 00:26:15.341 | 99.99th=[ 676] 00:26:15.341 bw ( KiB/s): min=30720, max=88064, per=5.47%, avg=53956.20, stdev=20976.83, samples=20 00:26:15.341 iops : min= 120, max= 344, avg=210.75, stdev=81.91, samples=20 00:26:15.341 lat (msec) : 10=0.05%, 20=0.55%, 50=0.32%, 100=2.30%, 250=41.80% 00:26:15.341 lat (msec) : 500=45.95%, 750=9.02% 00:26:15.341 cpu : usr=0.68%, sys=0.79%, ctx=892, majf=0, minf=1 00:26:15.341 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:15.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.341 issued rwts: total=0,2172,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.341 job7: (groupid=0, jobs=1): err= 0: pid=2380540: Mon Oct 28 05:02:05 2024 00:26:15.341 write: IOPS=368, BW=92.2MiB/s (96.7MB/s)(947MiB/10271msec); 0 zone resets 00:26:15.341 slat (usec): min=19, max=57311, avg=1872.95, stdev=5328.40 00:26:15.341 clat (usec): min=931, max=666227, avg=171444.30, stdev=125055.88 00:26:15.341 lat (usec): min=986, max=666289, avg=173317.25, stdev=126391.60 00:26:15.341 clat percentiles (msec): 00:26:15.341 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 27], 20.00th=[ 68], 00:26:15.341 | 30.00th=[ 94], 40.00th=[ 108], 50.00th=[ 132], 60.00th=[ 184], 00:26:15.341 | 70.00th=[ 232], 80.00th=[ 271], 90.00th=[ 351], 95.00th=[ 409], 00:26:15.341 | 99.00th=[ 527], 99.50th=[ 567], 99.90th=[ 642], 99.95th=[ 667], 00:26:15.341 | 99.99th=[ 667] 00:26:15.341 bw ( KiB/s): min=38912, max=208384, per=9.67%, avg=95360.50, stdev=49148.21, samples=20 00:26:15.341 iops : min= 152, max= 814, avg=372.45, stdev=192.05, samples=20 00:26:15.341 lat (usec) : 1000=0.05% 00:26:15.341 lat (msec) : 2=0.61%, 4=1.21%, 10=4.43%, 20=2.24%, 50=7.55% 00:26:15.341 lat (msec) : 100=17.89%, 250=40.35%, 500=24.28%, 750=1.37% 00:26:15.341 cpu : usr=1.18%, sys=1.30%, ctx=2028, majf=0, minf=1 00:26:15.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:15.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.341 issued rwts: total=0,3789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.341 job8: (groupid=0, jobs=1): err= 0: pid=2380541: Mon Oct 28 05:02:05 2024 00:26:15.341 
write: IOPS=308, BW=77.1MiB/s (80.9MB/s)(788MiB/10215msec); 0 zone resets 00:26:15.341 slat (usec): min=16, max=66988, avg=1466.53, stdev=5700.13 00:26:15.341 clat (usec): min=1270, max=517751, avg=205747.42, stdev=140373.32 00:26:15.341 lat (usec): min=1314, max=517783, avg=207213.95, stdev=141741.70 00:26:15.341 clat percentiles (msec): 00:26:15.341 | 1.00th=[ 4], 5.00th=[ 21], 10.00th=[ 36], 20.00th=[ 65], 00:26:15.341 | 30.00th=[ 95], 40.00th=[ 136], 50.00th=[ 182], 60.00th=[ 241], 00:26:15.341 | 70.00th=[ 284], 80.00th=[ 359], 90.00th=[ 414], 95.00th=[ 447], 00:26:15.341 | 99.00th=[ 493], 99.50th=[ 506], 99.90th=[ 510], 99.95th=[ 518], 00:26:15.341 | 99.99th=[ 518] 00:26:15.341 bw ( KiB/s): min=32768, max=154112, per=8.01%, avg=79059.75, stdev=32671.11, samples=20 00:26:15.341 iops : min= 128, max= 602, avg=308.80, stdev=127.61, samples=20 00:26:15.341 lat (msec) : 2=0.29%, 4=1.17%, 10=1.43%, 20=2.06%, 50=10.47% 00:26:15.341 lat (msec) : 100=15.70%, 250=32.11%, 500=35.95%, 750=0.82% 00:26:15.341 cpu : usr=1.02%, sys=1.08%, ctx=2381, majf=0, minf=1 00:26:15.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:15.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.341 issued rwts: total=0,3152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.341 job9: (groupid=0, jobs=1): err= 0: pid=2380542: Mon Oct 28 05:02:05 2024 00:26:15.341 write: IOPS=267, BW=66.8MiB/s (70.1MB/s)(687MiB/10270msec); 0 zone resets 00:26:15.341 slat (usec): min=22, max=102632, avg=2439.87, stdev=7364.69 00:26:15.341 clat (msec): min=4, max=674, avg=236.68, stdev=149.51 00:26:15.341 lat (msec): min=4, max=674, avg=239.12, stdev=151.49 00:26:15.341 clat percentiles (msec): 00:26:15.341 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 45], 20.00th=[ 85], 00:26:15.341 | 30.00th=[ 120], 40.00th=[ 174], 50.00th=[ 243], 60.00th=[ 288], 00:26:15.341 | 70.00th=[ 321], 80.00th=[ 376], 90.00th=[ 447], 95.00th=[ 485], 00:26:15.341 | 99.00th=[ 592], 99.50th=[ 617], 99.90th=[ 651], 99.95th=[ 676], 00:26:15.341 | 99.99th=[ 676] 00:26:15.341 bw ( KiB/s): min=32768, max=189572, per=6.96%, avg=68644.50, stdev=39717.38, samples=20 00:26:15.341 iops : min= 128, max= 740, avg=268.10, stdev=155.07, samples=20 00:26:15.341 lat (msec) : 10=0.33%, 20=2.26%, 50=8.81%, 100=15.08%, 250=24.65% 00:26:15.341 lat (msec) : 500=45.05%, 750=3.82% 00:26:15.341 cpu : usr=0.85%, sys=1.03%, ctx=1642, majf=0, minf=1 00:26:15.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:15.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.341 issued rwts: total=0,2746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.341 job10: (groupid=0, jobs=1): err= 0: pid=2380543: Mon Oct 28 05:02:05 2024 00:26:15.341 write: IOPS=419, BW=105MiB/s (110MB/s)(1077MiB/10261msec); 0 zone resets 00:26:15.341 slat (usec): min=15, max=80620, avg=1579.59, stdev=5522.45 00:26:15.341 clat (usec): min=843, max=734898, avg=150755.33, stdev=131428.89 00:26:15.341 lat (usec): min=917, max=734948, avg=152334.91, stdev=133023.40 00:26:15.341 clat percentiles (msec): 00:26:15.341 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 19], 20.00th=[ 46], 00:26:15.341 | 30.00th=[ 60], 40.00th=[ 
93], 50.00th=[ 108], 60.00th=[ 142], 00:26:15.341 | 70.00th=[ 188], 80.00th=[ 245], 90.00th=[ 359], 95.00th=[ 426], 00:26:15.341 | 99.00th=[ 535], 99.50th=[ 617], 99.90th=[ 709], 99.95th=[ 709], 00:26:15.341 | 99.99th=[ 735] 00:26:15.341 bw ( KiB/s): min=33280, max=233984, per=11.01%, avg=108664.65, stdev=58376.58, samples=20 00:26:15.341 iops : min= 130, max= 914, avg=424.40, stdev=228.13, samples=20 00:26:15.342 lat (usec) : 1000=0.07% 00:26:15.342 lat (msec) : 2=0.46%, 4=3.04%, 10=3.23%, 20=3.92%, 50=12.12% 00:26:15.342 lat (msec) : 100=21.36%, 250=36.65%, 500=16.48%, 750=2.67% 00:26:15.342 cpu : usr=1.20%, sys=1.47%, ctx=2702, majf=0, minf=1 00:26:15.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:15.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:15.342 issued rwts: total=0,4308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.342 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:15.342 00:26:15.342 Run status group 0 (all jobs): 00:26:15.342 WRITE: bw=963MiB/s (1010MB/s), 49.9MiB/s-116MiB/s (52.3MB/s-122MB/s), io=9896MiB (10.4GB), run=10208-10271msec 00:26:15.342 00:26:15.342 Disk stats (read/write): 00:26:15.342 nvme0n1: ios=49/4043, merge=0/0, ticks=73/1236700, in_queue=1236773, util=97.71% 00:26:15.342 nvme10n1: ios=46/9190, merge=0/0, ticks=2757/1232035, in_queue=1234792, util=99.76% 00:26:15.342 nvme1n1: ios=51/6493, merge=0/0, ticks=377/1243934, in_queue=1244311, util=100.00% 00:26:15.342 nvme2n1: ios=41/8958, merge=0/0, ticks=1137/1220640, in_queue=1221777, util=100.00% 00:26:15.342 nvme3n1: ios=49/8439, merge=0/0, ticks=1083/1239169, in_queue=1240252, util=100.00% 00:26:15.342 nvme4n1: ios=48/9489, merge=0/0, ticks=587/1249338, in_queue=1249925, util=100.00% 00:26:15.342 nvme5n1: ios=43/4278, merge=0/0, ticks=392/1230794, in_queue=1231186, util=100.00% 00:26:15.342 nvme6n1: ios=44/7511, merge=0/0, ticks=1363/1234913, in_queue=1236276, util=100.00% 00:26:15.342 nvme7n1: ios=41/6273, merge=0/0, ticks=976/1254668, in_queue=1255644, util=100.00% 00:26:15.342 nvme8n1: ios=44/5428, merge=0/0, ticks=1408/1233425, in_queue=1234833, util=100.00% 00:26:15.342 nvme9n1: ios=0/8554, merge=0/0, ticks=0/1231656, in_queue=1231656, util=99.08% 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:15.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 
-- # lsblk -l -o NAME,SERIAL 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:15.342 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.342 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.600 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.600 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.600 05:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:15.859 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- 
# lsblk -l -o NAME,SERIAL 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:15.859 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:15.859 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:16.118 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # 
lsblk -l -o NAME,SERIAL 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.118 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:16.376 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:16.376 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:16.376 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:16.376 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:16.376 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:16.376 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:16.376 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:16.634 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:16.634 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:16.634 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.634 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:16.634 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.634 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.635 05:02:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:16.896 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # 
lsblk -l -o NAME,SERIAL 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:16.896 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.896 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:17.156 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # 
lsblk -l -o NAME,SERIAL 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:17.156 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.156 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:17.414 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:17.414 rmmod nvme_tcp 00:26:17.414 rmmod nvme_fabrics 00:26:17.414 rmmod nvme_keyring 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@515 -- # '[' -n 2375645 ']' 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # killprocess 2375645 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 2375645 ']' 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 2375645 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2375645 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- 
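The xtrace above repeats target/multiconnection.sh lines 37-40 once per subsystem, cnode1 through cnode11: disconnect the initiator, wait until the SPDKn serial disappears from lsblk, then delete the subsystem over the SPDK RPC. A sketch of that loop, reconstructed from the trace, follows; apart from the three commands and $NVMF_SUBSYS (11 in this run) the details are paraphrased, and waitforserial_disconnect and rpc_cmd are helpers from the test harness (autotest_common.sh and the RPC wrapper), not standalone tools.

# reconstructed from the trace (multiconnection.sh@37-40); a sketch, not the verbatim script
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # detach the initiator from subsystem i
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    # harness helper: wait until no block device with serial SPDK$i remains visible
    waitforserial_disconnect "SPDK$i"
    # harness helper: remove the subsystem on the target side
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
done
# the trace that follows (rm -f ./local-job0-0-verify.state, nvmftestfini, rmmod of
# nvme_tcp/nvme_fabrics/nvme_keyring, killprocess 2375645) is the generic nvmf teardown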
# echo 'killing process with pid 2375645' 00:26:17.414 killing process with pid 2375645 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 2375645 00:26:17.414 05:02:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 2375645 00:26:17.981 05:02:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:17.981 05:02:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:17.981 05:02:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:17.981 05:02:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:17.981 05:02:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-save 00:26:17.981 05:02:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:17.981 05:02:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-restore 00:26:17.981 05:02:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:17.981 05:02:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:17.981 05:02:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.981 05:02:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.981 05:02:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:20.514 00:26:20.514 real 1m1.689s 00:26:20.514 user 3m33.437s 00:26:20.514 sys 0m17.345s 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.514 ************************************ 00:26:20.514 END TEST nvmf_multiconnection 00:26:20.514 ************************************ 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:20.514 ************************************ 00:26:20.514 START TEST nvmf_initiator_timeout 00:26:20.514 ************************************ 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:20.514 * Looking for test storage... 
00:26:20.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1689 -- # lcov --version 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.514 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:26:20.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.515 --rc genhtml_branch_coverage=1 00:26:20.515 --rc genhtml_function_coverage=1 00:26:20.515 --rc genhtml_legend=1 00:26:20.515 --rc geninfo_all_blocks=1 00:26:20.515 --rc geninfo_unexecuted_blocks=1 00:26:20.515 00:26:20.515 ' 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:26:20.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.515 --rc genhtml_branch_coverage=1 00:26:20.515 --rc genhtml_function_coverage=1 00:26:20.515 --rc genhtml_legend=1 00:26:20.515 --rc geninfo_all_blocks=1 00:26:20.515 --rc geninfo_unexecuted_blocks=1 00:26:20.515 00:26:20.515 ' 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:26:20.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.515 --rc genhtml_branch_coverage=1 00:26:20.515 --rc genhtml_function_coverage=1 00:26:20.515 --rc genhtml_legend=1 00:26:20.515 --rc geninfo_all_blocks=1 00:26:20.515 --rc geninfo_unexecuted_blocks=1 00:26:20.515 00:26:20.515 ' 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:26:20.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.515 --rc genhtml_branch_coverage=1 00:26:20.515 --rc genhtml_function_coverage=1 00:26:20.515 --rc genhtml_legend=1 00:26:20.515 --rc geninfo_all_blocks=1 00:26:20.515 --rc geninfo_unexecuted_blocks=1 00:26:20.515 00:26:20.515 ' 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.515 05:02:10 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:20.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:20.515 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:20.516 05:02:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:22.419 05:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.419 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:22.420 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:22.420 05:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:22.420 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:22.420 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.420 05:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:22.420 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # is_hw=yes 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.420 05:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:22.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:26:22.420 00:26:22.420 --- 10.0.0.2 ping statistics --- 00:26:22.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.420 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:22.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:26:22.420 00:26:22.420 --- 10.0.0.1 ping statistics --- 00:26:22.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.420 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.420 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # return 0 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # nvmfpid=2383710 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # waitforlisten 2383710 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 2383710 ']' 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:22.421 05:02:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.421 [2024-10-28 05:02:12.854028] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:26:22.421 [2024-10-28 05:02:12.854107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.421 [2024-10-28 05:02:12.997518] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:22.680 [2024-10-28 05:02:13.036745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:22.680 [2024-10-28 05:02:13.085630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.680 [2024-10-28 05:02:13.085702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.680 [2024-10-28 05:02:13.085730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.680 [2024-10-28 05:02:13.085742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.680 [2024-10-28 05:02:13.085753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
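(Aside: the RPC sequence that follows — bdev_malloc_create through nvmf_subsystem_add_listener and the nvme connect — is easier to read as one block. This is an illustrative sketch only, using scripts/rpc.py in place of the log's rpc_cmd wrapper and assuming it is run from the spdk repo root with the default /var/tmp/spdk.sock socket; arguments, NQNs and the 10.0.0.x addressing are taken from this run, and the waitforlisten step is omitted.)

# sketch of the target bring-up exercised below (rpc_cmd wraps scripts/rpc.py in the test scripts)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30    # delay latencies in microseconds
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
# While fio runs, the test bumps Delay0 well past the initiator's default 30 s I/O timeout,
# then restores it, which is the actual "initiator timeout" behaviour under test:
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000    # ~31 s per write
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30          # back to 30 us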
00:26:22.680 [2024-10-28 05:02:13.087370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.680 [2024-10-28 05:02:13.087394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:22.680 [2024-10-28 05:02:13.087448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:22.680 [2024-10-28 05:02:13.087451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.614 Malloc0 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.614 Delay0 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.614 [2024-10-28 05:02:13.978463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.614 05:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.614 05:02:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.614 05:02:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.614 05:02:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:23.614 05:02:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.614 05:02:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.614 [2024-10-28 05:02:14.006687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.614 05:02:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.614 05:02:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:24.179 05:02:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:24.179 05:02:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:24.179 05:02:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:24.179 05:02:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:24.179 05:02:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:26.707 05:02:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:26.707 05:02:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:26.707 05:02:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:26.707 05:02:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:26.707 05:02:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.707 05:02:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:26.707 05:02:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2384251 00:26:26.707 05:02:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:26:26.707 05:02:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:26.707 [global] 00:26:26.707 thread=1 00:26:26.707 invalidate=1 00:26:26.707 rw=write 00:26:26.707 time_based=1 00:26:26.707 runtime=60 00:26:26.707 ioengine=libaio 00:26:26.707 direct=1 00:26:26.707 bs=4096 00:26:26.707 iodepth=1 00:26:26.707 norandommap=0 00:26:26.707 numjobs=1 00:26:26.707 00:26:26.707 verify_dump=1 00:26:26.707 verify_backlog=512 00:26:26.707 verify_state_save=0 00:26:26.707 do_verify=1 00:26:26.707 verify=crc32c-intel 00:26:26.707 [job0] 00:26:26.707 filename=/dev/nvme0n1 00:26:26.707 Could not set queue depth (nvme0n1) 00:26:26.707 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:26.707 fio-3.35 00:26:26.707 Starting 1 thread 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.236 true 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.236 true 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.236 true 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.236 true 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.236 05:02:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:26:32.516 true 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.516 true 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.516 true 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.516 true 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:32.516 05:02:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2384251 00:27:28.789 00:27:28.789 job0: (groupid=0, jobs=1): err= 0: pid=2384320: Mon Oct 28 05:03:17 2024 00:27:28.789 read: IOPS=282, BW=1130KiB/s (1157kB/s)(66.2MiB/60003msec) 00:27:28.789 slat (usec): min=4, max=9100, avg=17.56, stdev=91.03 00:27:28.789 clat (usec): min=264, max=40820k, avg=3270.56, stdev=313577.22 00:27:28.789 lat (usec): min=275, max=40820k, avg=3288.12, stdev=313577.36 00:27:28.789 clat percentiles (usec): 00:27:28.789 | 1.00th=[ 293], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 326], 00:27:28.789 | 30.00th=[ 334], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 379], 00:27:28.789 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 478], 95.00th=[ 515], 00:27:28.789 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:27:28.789 | 99.99th=[42206] 00:27:28.789 write: IOPS=290, BW=1160KiB/s (1188kB/s)(68.0MiB/60003msec); 0 zone resets 00:27:28.789 slat (usec): min=5, max=982, avg=12.75, stdev=11.23 00:27:28.789 clat (usec): min=187, max=403, avg=224.90, stdev=29.52 00:27:28.789 lat (usec): min=193, max=1232, avg=237.66, stdev=35.42 00:27:28.789 clat percentiles (usec): 00:27:28.789 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 206], 00:27:28.789 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 221], 00:27:28.789 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 265], 95.00th=[ 289], 00:27:28.789 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 379], 99.95th=[ 388], 00:27:28.789 | 99.99th=[ 392] 00:27:28.789 bw ( KiB/s): min= 
4096, max= 8192, per=100.00%, avg=6054.96, stdev=1446.62, samples=23 00:27:28.789 iops : min= 1024, max= 2048, avg=1513.74, stdev=361.66, samples=23 00:27:28.789 lat (usec) : 250=44.69%, 500=51.89%, 750=2.82%, 1000=0.01% 00:27:28.789 lat (msec) : 4=0.01%, 50=0.59%, >=2000=0.01% 00:27:28.789 cpu : usr=0.42%, sys=0.89%, ctx=34360, majf=0, minf=1 00:27:28.789 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:28.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.789 issued rwts: total=16948,17408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.789 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:28.789 00:27:28.789 Run status group 0 (all jobs): 00:27:28.789 READ: bw=1130KiB/s (1157kB/s), 1130KiB/s-1130KiB/s (1157kB/s-1157kB/s), io=66.2MiB (69.4MB), run=60003-60003msec 00:27:28.789 WRITE: bw=1160KiB/s (1188kB/s), 1160KiB/s-1160KiB/s (1188kB/s-1188kB/s), io=68.0MiB (71.3MB), run=60003-60003msec 00:27:28.789 00:27:28.789 Disk stats (read/write): 00:27:28.789 nvme0n1: ios=17100/17408, merge=0/0, ticks=14908/3771, in_queue=18679, util=99.65% 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:28.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:28.789 nvmf hotplug test: fio successful as expected 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM 
EXIT 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:28.789 rmmod nvme_tcp 00:27:28.789 rmmod nvme_fabrics 00:27:28.789 rmmod nvme_keyring 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@515 -- # '[' -n 2383710 ']' 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # killprocess 2383710 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 2383710 ']' 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 2383710 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2383710 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2383710' 00:27:28.789 killing process with pid 2383710 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 2383710 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 2383710 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-save 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:28.789 05:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:28.789 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.790 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.790 05:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.048 05:03:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:29.048 00:27:29.048 real 1m9.044s 00:27:29.048 user 4m14.299s 00:27:29.048 sys 0m7.075s 00:27:29.048 05:03:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:29.048 05:03:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:29.048 ************************************ 00:27:29.048 END TEST nvmf_initiator_timeout 00:27:29.048 ************************************ 00:27:29.048 05:03:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:29.048 05:03:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:29.048 05:03:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:29.048 05:03:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:29.048 05:03:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:31.581 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.581 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:31.581 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:31.581 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.582 05:03:21 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:31.582 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:31.582 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:31.582 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:31.582 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:31.582 ************************************ 00:27:31.582 START TEST nvmf_perf_adq 00:27:31.582 ************************************ 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:31.582 * Looking for test storage... 
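The two "Found net devices under 0000:0a:00.x" records above come from the PCI scan that selects supported NICs (here Intel E810, vendor 0x8086 / device 0x159b) and then lists the kernel net devices exposed under each matching function in sysfs. A minimal, standalone sketch of that idea, assuming only the sysfs layout and device ID visible in the trace (not the actual nvmf/common.sh implementation):

  #!/usr/bin/env bash
  # List net devices for every Intel E810 (0x8086:0x159b) PCI function via sysfs.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")    # e.g. 0x8086
      device=$(<"$pci/device")    # e.g. 0x159b
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue
          echo "Found net device under ${pci##*/}: ${net##*/} ($(<"$net/operstate"))"
      done
  done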
00:27:31.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1689 -- # lcov --version 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:27:31.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.582 --rc genhtml_branch_coverage=1 00:27:31.582 --rc genhtml_function_coverage=1 00:27:31.582 --rc genhtml_legend=1 00:27:31.582 --rc geninfo_all_blocks=1 00:27:31.582 --rc geninfo_unexecuted_blocks=1 00:27:31.582 00:27:31.582 ' 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:27:31.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.582 --rc genhtml_branch_coverage=1 00:27:31.582 --rc genhtml_function_coverage=1 00:27:31.582 --rc genhtml_legend=1 00:27:31.582 --rc geninfo_all_blocks=1 00:27:31.582 --rc geninfo_unexecuted_blocks=1 00:27:31.582 00:27:31.582 ' 00:27:31.582 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:27:31.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.583 --rc genhtml_branch_coverage=1 00:27:31.583 --rc genhtml_function_coverage=1 00:27:31.583 --rc genhtml_legend=1 00:27:31.583 --rc geninfo_all_blocks=1 00:27:31.583 --rc geninfo_unexecuted_blocks=1 00:27:31.583 00:27:31.583 ' 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:27:31.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.583 --rc genhtml_branch_coverage=1 00:27:31.583 --rc genhtml_function_coverage=1 00:27:31.583 --rc genhtml_legend=1 00:27:31.583 --rc geninfo_all_blocks=1 00:27:31.583 --rc geninfo_unexecuted_blocks=1 00:27:31.583 00:27:31.583 ' 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
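The lcov version gate traced above (cmp_versions 1.15 '<' 2) splits both version strings on '.', '-' and ':' and compares them field by field to decide whether the older-style branch/function coverage flags are needed. A simplified sketch of that comparison, covering only the behaviour visible in the trace and assuming plain decimal components (scripts/common.sh handles more cases than this):

  # Return success (0) when version $1 is strictly lower than version $2.
  version_lt() {
      local IFS=.-:
      local -a v1=($1) v2=($2)
      local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          a=${v1[i]:-0} b=${v2[i]:-0}
          (( a < b )) && return 0   # earlier field decides: strictly lower
          (( a > b )) && return 1   # strictly higher
      done
      return 1                      # equal is not strictly lower
  }

  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov: add legacy --rc coverage flags"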
00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:31.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:31.583 05:03:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:31.583 05:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:33.487 05:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:33.487 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:33.487 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:33.487 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:33.487 05:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:33.487 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:33.487 05:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:34.423 05:03:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:36.952 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:42.227 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:42.228 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:42.228 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:42.228 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:42.228 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:42.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:27:42.228 00:27:42.228 --- 10.0.0.2 ping statistics --- 00:27:42.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.228 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:42.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:42.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:27:42.228 00:27:42.228 --- 10.0.0.1 ping statistics --- 00:27:42.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.228 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:42.228 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=2396368 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 2396368 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2396368 ']' 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.229 [2024-10-28 05:03:32.321124] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
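The nvmf_tcp_init sequence above builds the test topology the rest of this run depends on: one E810 port (cvl_0_0) is moved into a private network namespace and given the target address 10.0.0.2/24, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, connectivity is verified with a ping in each direction, and nvmf_tgt is then started inside the namespace with -m 0xF --wait-for-rpc. A condensed sketch of that setup using the device names and addresses from the log (paths abbreviated; the real nvmf/common.sh also handles cleanup and retries):

  # Target side lives in its own namespace; the initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target-facing port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP
  ping -c 1 10.0.0.2                                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator

  # Start the NVMe-oF target inside the namespace, paused until RPC configuration arrives.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &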
00:27:42.229 [2024-10-28 05:03:32.321215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.229 [2024-10-28 05:03:32.460073] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:42.229 [2024-10-28 05:03:32.502554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:42.229 [2024-10-28 05:03:32.556338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.229 [2024-10-28 05:03:32.556414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.229 [2024-10-28 05:03:32.556430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.229 [2024-10-28 05:03:32.556444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.229 [2024-10-28 05:03:32.556456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:42.229 [2024-10-28 05:03:32.558215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.229 [2024-10-28 05:03:32.558272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.229 [2024-10-28 05:03:32.558322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:42.229 [2024-10-28 05:03:32.558326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.229 05:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.229 [2024-10-28 05:03:32.780658] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.229 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.487 Malloc1 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:42.487 [2024-10-28 05:03:32.844651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2396400 00:27:42.487 
05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:42.487 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:44.395 05:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:44.395 05:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.395 05:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.395 05:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.395 05:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:44.395 "tick_rate": 2693500000, 00:27:44.395 "poll_groups": [ 00:27:44.395 { 00:27:44.395 "name": "nvmf_tgt_poll_group_000", 00:27:44.395 "admin_qpairs": 1, 00:27:44.395 "io_qpairs": 1, 00:27:44.395 "current_admin_qpairs": 1, 00:27:44.395 "current_io_qpairs": 1, 00:27:44.395 "pending_bdev_io": 0, 00:27:44.395 "completed_nvme_io": 18817, 00:27:44.395 "transports": [ 00:27:44.395 { 00:27:44.395 "trtype": "TCP" 00:27:44.395 } 00:27:44.395 ] 00:27:44.395 }, 00:27:44.395 { 00:27:44.395 "name": "nvmf_tgt_poll_group_001", 00:27:44.395 "admin_qpairs": 0, 00:27:44.395 "io_qpairs": 1, 00:27:44.395 "current_admin_qpairs": 0, 00:27:44.395 "current_io_qpairs": 1, 00:27:44.395 "pending_bdev_io": 0, 00:27:44.395 "completed_nvme_io": 18561, 00:27:44.395 "transports": [ 00:27:44.395 { 00:27:44.395 "trtype": "TCP" 00:27:44.395 } 00:27:44.395 ] 00:27:44.395 }, 00:27:44.395 { 00:27:44.395 "name": "nvmf_tgt_poll_group_002", 00:27:44.395 "admin_qpairs": 0, 00:27:44.396 "io_qpairs": 1, 00:27:44.396 "current_admin_qpairs": 0, 00:27:44.396 "current_io_qpairs": 1, 00:27:44.396 "pending_bdev_io": 0, 00:27:44.396 "completed_nvme_io": 16615, 00:27:44.396 "transports": [ 00:27:44.396 { 00:27:44.396 "trtype": "TCP" 00:27:44.396 } 00:27:44.396 ] 00:27:44.396 }, 00:27:44.396 { 00:27:44.396 "name": "nvmf_tgt_poll_group_003", 00:27:44.396 "admin_qpairs": 0, 00:27:44.396 "io_qpairs": 1, 00:27:44.396 "current_admin_qpairs": 0, 00:27:44.396 "current_io_qpairs": 1, 00:27:44.396 "pending_bdev_io": 0, 00:27:44.396 "completed_nvme_io": 18237, 00:27:44.396 "transports": [ 00:27:44.396 { 00:27:44.396 "trtype": "TCP" 00:27:44.396 } 00:27:44.396 ] 00:27:44.396 } 00:27:44.396 ] 00:27:44.396 }' 00:27:44.396 05:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:44.396 05:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:44.396 05:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:44.396 05:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:44.396 05:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2396400 00:27:52.507 Initializing NVMe Controllers 00:27:52.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:52.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:52.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:52.507 
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:52.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:52.507 Initialization complete. Launching workers. 00:27:52.507 ======================================================== 00:27:52.507 Latency(us) 00:27:52.507 Device Information : IOPS MiB/s Average min max 00:27:52.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10192.20 39.81 6280.63 3169.11 8373.81 00:27:52.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10366.70 40.49 6174.19 3402.52 7723.84 00:27:52.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9244.70 36.11 6925.33 3192.60 10869.08 00:27:52.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10484.00 40.95 6103.86 2895.15 8651.12 00:27:52.507 ======================================================== 00:27:52.507 Total : 40287.59 157.37 6355.18 2895.15 10869.08 00:27:52.507 00:27:52.507 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:52.507 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:52.507 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:52.507 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:52.507 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:52.507 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:52.507 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:52.507 rmmod nvme_tcp 00:27:52.765 rmmod nvme_fabrics 00:27:52.765 rmmod nvme_keyring 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 2396368 ']' 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 2396368 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2396368 ']' 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2396368 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2396368 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2396368' 00:27:52.765 killing process with pid 2396368 00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2396368 
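The pass/fail criterion for this first ADQ phase is the nvmf_get_stats check issued just before the perf numbers above: with the target on reactor mask 0xF and spdk_nvme_perf on cores 0xF0, each of the four target poll groups must end up servicing exactly one I/O queue pair. A hedged sketch of that check, reusing the jq filter from the trace (the rpc.py invocation is an assumption; the test itself goes through its rpc_cmd wrapper):

  # Count poll groups that currently own exactly one I/O qpair; ADQ-style placement
  # is only considered correct when all four poll groups (-m 0xF) report exactly one.
  count=$(./scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
  if [[ $count -ne 4 ]]; then
      echo "ADQ placement check failed: $count of 4 poll groups have one active qpair" >&2
      exit 1
  fi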
00:27:52.765 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2396368 00:27:53.023 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:53.023 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:53.023 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:53.023 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:53.023 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:27:53.023 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:53.023 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:27:53.023 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:53.023 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:53.023 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.023 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.023 05:03:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.928 05:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:54.928 05:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:54.928 05:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:54.928 05:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:55.863 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:58.391 05:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:03.655 05:03:53 
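The adq_reload_driver step traced above tears down and reloads the ice driver so the E810 ports come back with a clean queue/channel state before ADQ is reconfigured. A minimal sketch of the equivalent manual sequence, using only the commands visible in the trace (run as root; the cvl_* links disappear and reappear across the reload):

# illustrative only: reset E810 driver state before configuring ADQ channels
modprobe -a sch_mqprio        # make sure the mqprio qdisc module is available
rmmod ice                     # unload the NIC driver (drops the cvl_* links)
modprobe ice                  # reload it with a clean channel configuration
sleep 5                       # give the ports time to re-register and come back up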
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:03.655 05:03:53 
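gather_supported_nvmf_pci_devs, whose trace fills the lines above, builds its candidate NIC list from hard-coded PCI IDs: Intel E810 ports are 0x8086:0x1592 and 0x8086:0x159b, X722 is 0x8086:0x37d2, and the remaining IDs cover Mellanox parts. A quick way to reproduce the same classification by hand with lspci (device IDs taken from the trace; output format varies by distro):

# list the ports the harness would put in the e810[] array
lspci -D -d 8086:1592
lspci -D -d 8086:159b
# X722 ports end up in the x722[] array instead
lspci -D -d 8086:37d2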
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.655 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:03.655 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:03.656 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:03.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:03.656 05:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:03.656 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:03.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:28:03.656 00:28:03.656 --- 10.0.0.2 ping statistics --- 00:28:03.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.656 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:03.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:28:03.656 00:28:03.656 --- 10.0.0.1 ping statistics --- 00:28:03.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.656 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:03.656 05:03:53 
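nvmf_tcp_init, shown in the trace above, splits the two E810 ports across network namespaces so target and initiator can talk over real hardware on a single host: cvl_0_0 (10.0.0.2, the target side) is moved into the cvl_0_0_ns_spdk namespace, cvl_0_1 (10.0.0.1, the initiator side) stays in the root namespace, an iptables rule opens TCP/4420, and connectivity is verified with ping in both directions. A condensed sketch of that setup using the names and addresses from the log:

ip netns add cvl_0_0_ns_spdk                        # namespace that will host the NVMe/TCP target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the initiator reach the listener
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace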
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:03.656 net.core.busy_poll = 1 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:03.656 net.core.busy_read = 1 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:03.656 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=2399080 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 2399080 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2399080 ']' 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:03.656 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:03.656 [2024-10-28 05:03:54.143336] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
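adq_configure_driver is the ADQ-specific part of the run: it enables hardware TC offload on the target port, turns on kernel busy polling, and installs an mqprio root qdisc plus a flower filter so NVMe/TCP traffic to 10.0.0.2:4420 lands in its own hardware traffic class (and therefore its own queue group). A sketch of the same commands, pulled from the trace above and run inside the target namespace (the 2@0 2@2 queue split is what this harness uses, not a general recommendation):

NS() { ip netns exec cvl_0_0_ns_spdk "$@"; }                  # helper: run a command in the target namespace
NS ethtool --offload cvl_0_0 hw-tc-offload on                 # let the NIC enforce traffic classes
NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                                # busy-poll sockets instead of sleeping
sysctl -w net.core.busy_read=1
# TC0 = default traffic (queues 0-1), TC1 = NVMe/TCP (queues 2-3), offloaded in channel mode
NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
NS tc qdisc add dev cvl_0_0 ingress
# steer TCP traffic aimed at the listener into hardware TC 1
NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1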
00:28:03.656 [2024-10-28 05:03:54.143416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.914 [2024-10-28 05:03:54.282417] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:03.914 [2024-10-28 05:03:54.318000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:03.914 [2024-10-28 05:03:54.367819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.914 [2024-10-28 05:03:54.367884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.914 [2024-10-28 05:03:54.367898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.914 [2024-10-28 05:03:54.367910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.914 [2024-10-28 05:03:54.367920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:03.914 [2024-10-28 05:03:54.369563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.914 [2024-10-28 05:03:54.369631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.914 [2024-10-28 05:03:54.369698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.914 [2024-10-28 05:03:54.369702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.847 05:03:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:04.847 [2024-10-28 05:03:55.362060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.847 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:04.848 Malloc1 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:04.848 [2024-10-28 05:03:55.432780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2399237 00:28:04.848 
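With the target started under ip netns exec ... nvmf_tgt -m 0xF --wait-for-rpc, adq_configure_nvmf_target drives the rest over JSON-RPC: it enables placement IDs and zero-copy send on the posix sock implementation, finishes framework init, creates the TCP transport with an elevated socket priority, and exposes a 64 MiB malloc bdev as namespace 1 of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. The harness's rpc_cmd wrapper maps onto SPDK's scripts/rpc.py; a sketch of the same sequence with the arguments copied from the trace (the rpc.py path is an assumption about your checkout):

RPC="./scripts/rpc.py"   # assumed location of SPDK's RPC client; rpc_cmd in the harness does the same job
$RPC sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$RPC bdev_malloc_create 64 512 -b Malloc1                      # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420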
05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:04.848 05:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:07.377 05:03:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:07.377 05:03:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.377 05:03:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:07.377 05:03:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.377 05:03:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:07.377 "tick_rate": 2693500000, 00:28:07.377 "poll_groups": [ 00:28:07.378 { 00:28:07.378 "name": "nvmf_tgt_poll_group_000", 00:28:07.378 "admin_qpairs": 1, 00:28:07.378 "io_qpairs": 0, 00:28:07.378 "current_admin_qpairs": 1, 00:28:07.378 "current_io_qpairs": 0, 00:28:07.378 "pending_bdev_io": 0, 00:28:07.378 "completed_nvme_io": 0, 00:28:07.378 "transports": [ 00:28:07.378 { 00:28:07.378 "trtype": "TCP" 00:28:07.378 } 00:28:07.378 ] 00:28:07.378 }, 00:28:07.378 { 00:28:07.378 "name": "nvmf_tgt_poll_group_001", 00:28:07.378 "admin_qpairs": 0, 00:28:07.378 "io_qpairs": 4, 00:28:07.378 "current_admin_qpairs": 0, 00:28:07.378 "current_io_qpairs": 4, 00:28:07.378 "pending_bdev_io": 0, 00:28:07.378 "completed_nvme_io": 31172, 00:28:07.378 "transports": [ 00:28:07.378 { 00:28:07.378 "trtype": "TCP" 00:28:07.378 } 00:28:07.378 ] 00:28:07.378 }, 00:28:07.378 { 00:28:07.378 "name": "nvmf_tgt_poll_group_002", 00:28:07.378 "admin_qpairs": 0, 00:28:07.378 "io_qpairs": 0, 00:28:07.378 "current_admin_qpairs": 0, 00:28:07.378 "current_io_qpairs": 0, 00:28:07.378 "pending_bdev_io": 0, 00:28:07.378 "completed_nvme_io": 0, 00:28:07.378 "transports": [ 00:28:07.378 { 00:28:07.378 "trtype": "TCP" 00:28:07.378 } 00:28:07.378 ] 00:28:07.378 }, 00:28:07.378 { 00:28:07.378 "name": "nvmf_tgt_poll_group_003", 00:28:07.378 "admin_qpairs": 0, 00:28:07.378 "io_qpairs": 0, 00:28:07.378 "current_admin_qpairs": 0, 00:28:07.378 "current_io_qpairs": 0, 00:28:07.378 "pending_bdev_io": 0, 00:28:07.378 "completed_nvme_io": 0, 00:28:07.378 "transports": [ 00:28:07.378 { 00:28:07.378 "trtype": "TCP" 00:28:07.378 } 00:28:07.378 ] 00:28:07.378 } 00:28:07.378 ] 00:28:07.378 }' 00:28:07.378 05:03:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:07.378 05:03:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:07.378 05:03:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:28:07.378 05:03:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:28:07.378 05:03:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2399237 00:28:15.488 Initializing NVMe Controllers 00:28:15.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:15.488 
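While spdk_nvme_perf runs on cores 0xF0, the harness queries nvmf_get_stats and requires that at least two of the four poll groups still own zero I/O qpairs, i.e. that the ADQ filter really funnelled all connections into one group (in the JSON above, nvmf_tgt_poll_group_001 holds all 4 io_qpairs and the other three are idle). A sketch of that pass/fail check, with the jq filter copied from the trace (rpc.py stands in for the harness's rpc_cmd wrapper):

idle=$(./scripts/rpc.py nvmf_get_stats \
       | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
       | wc -l)                      # one output line per poll group with no I/O qpairs
if [[ $idle -lt 2 ]]; then
    echo "ADQ steering ineffective: I/O qpairs spread across poll groups" >&2
    exit 1
fi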
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:15.488 Initialization complete. Launching workers. 00:28:15.488 ======================================================== 00:28:15.489 Latency(us) 00:28:15.489 Device Information : IOPS MiB/s Average min max 00:28:15.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4323.60 16.89 14809.77 2486.59 62695.39 00:28:15.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4790.00 18.71 13367.48 1936.13 60355.74 00:28:15.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4032.00 15.75 15926.98 1831.55 63996.43 00:28:15.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4201.80 16.41 15247.98 1961.85 65703.06 00:28:15.489 ======================================================== 00:28:15.489 Total : 17347.40 67.76 14777.33 1831.55 65703.06 00:28:15.489 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:15.489 rmmod nvme_tcp 00:28:15.489 rmmod nvme_fabrics 00:28:15.489 rmmod nvme_keyring 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 2399080 ']' 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 2399080 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2399080 ']' 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2399080 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2399080 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2399080' 00:28:15.489 killing process with pid 2399080 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 
2399080 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2399080 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:15.489 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:28:15.489 05:04:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:15.489 05:04:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:15.489 05:04:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.489 05:04:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.489 05:04:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:18.838 00:28:18.838 real 0m47.253s 00:28:18.838 user 2m41.323s 00:28:18.838 sys 0m10.667s 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.838 ************************************ 00:28:18.838 END TEST nvmf_perf_adq 00:28:18.838 ************************************ 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:18.838 ************************************ 00:28:18.838 START TEST nvmf_shutdown 00:28:18.838 ************************************ 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:18.838 * Looking for test storage... 
00:28:18.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1689 -- # lcov --version 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:18.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.838 --rc genhtml_branch_coverage=1 00:28:18.838 --rc genhtml_function_coverage=1 00:28:18.838 --rc genhtml_legend=1 00:28:18.838 --rc geninfo_all_blocks=1 00:28:18.838 --rc geninfo_unexecuted_blocks=1 00:28:18.838 00:28:18.838 ' 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:18.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.838 --rc genhtml_branch_coverage=1 00:28:18.838 --rc genhtml_function_coverage=1 00:28:18.838 --rc genhtml_legend=1 00:28:18.838 --rc geninfo_all_blocks=1 00:28:18.838 --rc geninfo_unexecuted_blocks=1 00:28:18.838 00:28:18.838 ' 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:18.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.838 --rc genhtml_branch_coverage=1 00:28:18.838 --rc genhtml_function_coverage=1 00:28:18.838 --rc genhtml_legend=1 00:28:18.838 --rc geninfo_all_blocks=1 00:28:18.838 --rc geninfo_unexecuted_blocks=1 00:28:18.838 00:28:18.838 ' 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:18.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.838 --rc genhtml_branch_coverage=1 00:28:18.838 --rc genhtml_function_coverage=1 00:28:18.838 --rc genhtml_legend=1 00:28:18.838 --rc geninfo_all_blocks=1 00:28:18.838 --rc geninfo_unexecuted_blocks=1 00:28:18.838 00:28:18.838 ' 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
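The shutdown suite starts by probing the installed lcov (1.15 here) against 2.x through scripts/common.sh's version helpers, whose trace fills the lines above: each version string is split on '.', '-' and ':' and the fields are compared numerically. A simplified stand-alone sketch of the same idea (the real cmp_versions/lt helpers in scripts/common.sh handle more operators and edge cases):

version_lt() {                       # succeed if $1 sorts before $2; numeric dotted versions only
    local IFS='.-:' i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for i in "${!a[@]}"; do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
version_lt 1.15 2 && echo "lcov is older than 2.x"   # mirrors the 'lt 1.15 2' call in the trace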
00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.838 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:18.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:18.839 05:04:09 
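Note the "[: : integer expression expected" warning captured above: build_nvmf_app_args in nvmf/common.sh runs an arithmetic test ('[' ... -eq 1 ']') against a variable that is empty in this environment, so the branch is skipped with a noisy message rather than failing the run. The usual defensive pattern is to give such a variable a numeric default before testing it; a generic sketch (VALUE is a placeholder, not the actual variable tested at line 33):

# hypothetical guard: default the flag to 0 so '[' always sees an integer
if [ "${VALUE:-0}" -eq 1 ]; then
    :   # whatever the branch originally did
fi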
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:18.839 ************************************ 00:28:18.839 START TEST nvmf_shutdown_tc1 00:28:18.839 ************************************ 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.839 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.745 05:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.745 05:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:20.745 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:20.745 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:20.745 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:20.745 05:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:20.745 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:20.745 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:20.746 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:21.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:28:21.006 00:28:21.006 --- 10.0.0.2 ping statistics --- 00:28:21.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.006 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:28:21.006 00:28:21.006 --- 10.0.0.1 ping statistics --- 00:28:21.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.006 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=2402491 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 2402491 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2402491 ']' 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
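The nvmf_tcp_init trace above amounts to a small two-port loopback topology: one port of the NIC is moved into a private network namespace and addressed as the target, while the other port stays in the root namespace as the initiator. A condensed, re-runnable sketch of those steps follows; the interface names cvl_0_0/cvl_0_1 and the namespace name are the values observed in this run, not general defaults.

    # Sketch of the topology set up by nvmf_tcp_init (names taken from this run)
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                      # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator/host side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP (port 4420) in, tagged so teardown can remove only this rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # host -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> host reachability check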
00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:21.006 05:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:21.006 [2024-10-28 05:04:11.465480] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:28:21.006 [2024-10-28 05:04:11.465586] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.266 [2024-10-28 05:04:11.607134] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:21.266 [2024-10-28 05:04:11.651132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:21.266 [2024-10-28 05:04:11.700500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.266 [2024-10-28 05:04:11.700566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.266 [2024-10-28 05:04:11.700593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.266 [2024-10-28 05:04:11.700606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.266 [2024-10-28 05:04:11.700618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.266 [2024-10-28 05:04:11.702324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.266 [2024-10-28 05:04:11.702455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.266 [2024-10-28 05:04:11.702517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.266 [2024-10-28 05:04:11.702514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:22.201 [2024-10-28 05:04:12.474428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
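For reference, the nvmfappstart and transport-creation steps traced above come down to roughly the following when run by hand with scripts/rpc.py instead of the suite's rpc_cmd wrapper. This is a sketch, not the literal test code; the core mask, trace flags and transport options mirror the ones logged in this run.

    # Start the NVMe-oF target inside the namespace created earlier
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    # Block until the app is ready to serve RPCs on its default socket
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init
    # Create the TCP transport with the same options as the trace (-t tcp -o -u 8192)
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192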
00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:22.201 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:22.201 Malloc1 00:28:22.201 [2024-10-28 05:04:12.574459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.201 Malloc2 00:28:22.201 Malloc3 00:28:22.201 Malloc4 00:28:22.201 Malloc5 00:28:22.461 Malloc6 00:28:22.461 Malloc7 00:28:22.461 Malloc8 00:28:22.461 Malloc9 00:28:22.461 Malloc10 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2402678 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2402678 /var/tmp/bdevperf.sock 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2402678 ']' 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:28:22.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
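The per-subsystem blocks cat-ed into rpcs.txt above are not echoed into this trace, so the exact RPC batch is not visible here. As a rough illustration only, the batch for subsystem 1 would look something like the sketch below, repeated for cnode1 through cnode10; the bdev size, block size and serial number are guesses, not values taken from this run, while the Malloc1 bdev name, cnode1 NQN and the 10.0.0.2:4420 listener do appear in the log.

    # Hypothetical reconstruction of one rpcs.txt block (subsystem 1);
    # the real contents are generated by target/shutdown.sh
    ./scripts/rpc.py bdev_malloc_create -b Malloc1 128 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420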
00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:22.461 { 00:28:22.461 "params": { 00:28:22.461 "name": "Nvme$subsystem", 00:28:22.461 "trtype": "$TEST_TRANSPORT", 00:28:22.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.461 "adrfam": "ipv4", 00:28:22.461 "trsvcid": "$NVMF_PORT", 00:28:22.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.461 "hdgst": ${hdgst:-false}, 00:28:22.461 "ddgst": ${ddgst:-false} 00:28:22.461 }, 00:28:22.461 "method": "bdev_nvme_attach_controller" 00:28:22.461 } 00:28:22.461 EOF 00:28:22.461 )") 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:22.461 { 00:28:22.461 "params": { 00:28:22.461 "name": "Nvme$subsystem", 00:28:22.461 "trtype": "$TEST_TRANSPORT", 00:28:22.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.461 "adrfam": "ipv4", 00:28:22.461 "trsvcid": "$NVMF_PORT", 00:28:22.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.461 "hdgst": ${hdgst:-false}, 00:28:22.461 "ddgst": ${ddgst:-false} 00:28:22.461 }, 00:28:22.461 "method": "bdev_nvme_attach_controller" 00:28:22.461 } 00:28:22.461 EOF 00:28:22.461 )") 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:22.461 { 00:28:22.461 "params": { 00:28:22.461 "name": "Nvme$subsystem", 00:28:22.461 "trtype": "$TEST_TRANSPORT", 00:28:22.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.461 "adrfam": "ipv4", 00:28:22.461 "trsvcid": "$NVMF_PORT", 00:28:22.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.461 "hdgst": ${hdgst:-false}, 00:28:22.461 "ddgst": ${ddgst:-false} 00:28:22.461 }, 00:28:22.461 "method": "bdev_nvme_attach_controller" 00:28:22.461 } 00:28:22.461 EOF 00:28:22.461 )") 00:28:22.461 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:22.720 { 00:28:22.720 "params": { 00:28:22.720 "name": "Nvme$subsystem", 00:28:22.720 
"trtype": "$TEST_TRANSPORT", 00:28:22.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.720 "adrfam": "ipv4", 00:28:22.720 "trsvcid": "$NVMF_PORT", 00:28:22.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.720 "hdgst": ${hdgst:-false}, 00:28:22.720 "ddgst": ${ddgst:-false} 00:28:22.720 }, 00:28:22.720 "method": "bdev_nvme_attach_controller" 00:28:22.720 } 00:28:22.720 EOF 00:28:22.720 )") 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:22.720 { 00:28:22.720 "params": { 00:28:22.720 "name": "Nvme$subsystem", 00:28:22.720 "trtype": "$TEST_TRANSPORT", 00:28:22.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.720 "adrfam": "ipv4", 00:28:22.720 "trsvcid": "$NVMF_PORT", 00:28:22.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.720 "hdgst": ${hdgst:-false}, 00:28:22.720 "ddgst": ${ddgst:-false} 00:28:22.720 }, 00:28:22.720 "method": "bdev_nvme_attach_controller" 00:28:22.720 } 00:28:22.720 EOF 00:28:22.720 )") 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:22.720 { 00:28:22.720 "params": { 00:28:22.720 "name": "Nvme$subsystem", 00:28:22.720 "trtype": "$TEST_TRANSPORT", 00:28:22.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.720 "adrfam": "ipv4", 00:28:22.720 "trsvcid": "$NVMF_PORT", 00:28:22.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.720 "hdgst": ${hdgst:-false}, 00:28:22.720 "ddgst": ${ddgst:-false} 00:28:22.720 }, 00:28:22.720 "method": "bdev_nvme_attach_controller" 00:28:22.720 } 00:28:22.720 EOF 00:28:22.720 )") 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:22.720 { 00:28:22.720 "params": { 00:28:22.720 "name": "Nvme$subsystem", 00:28:22.720 "trtype": "$TEST_TRANSPORT", 00:28:22.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.720 "adrfam": "ipv4", 00:28:22.720 "trsvcid": "$NVMF_PORT", 00:28:22.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.720 "hdgst": ${hdgst:-false}, 00:28:22.720 "ddgst": ${ddgst:-false} 00:28:22.720 }, 00:28:22.720 "method": "bdev_nvme_attach_controller" 00:28:22.720 } 00:28:22.720 EOF 00:28:22.720 )") 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:22.720 05:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:22.720 { 00:28:22.720 "params": { 00:28:22.720 "name": "Nvme$subsystem", 00:28:22.720 "trtype": "$TEST_TRANSPORT", 00:28:22.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.720 "adrfam": "ipv4", 00:28:22.720 "trsvcid": "$NVMF_PORT", 00:28:22.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.720 "hdgst": ${hdgst:-false}, 00:28:22.720 "ddgst": ${ddgst:-false} 00:28:22.720 }, 00:28:22.720 "method": "bdev_nvme_attach_controller" 00:28:22.720 } 00:28:22.720 EOF 00:28:22.720 )") 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:22.720 { 00:28:22.720 "params": { 00:28:22.720 "name": "Nvme$subsystem", 00:28:22.720 "trtype": "$TEST_TRANSPORT", 00:28:22.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.720 "adrfam": "ipv4", 00:28:22.720 "trsvcid": "$NVMF_PORT", 00:28:22.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.720 "hdgst": ${hdgst:-false}, 00:28:22.720 "ddgst": ${ddgst:-false} 00:28:22.720 }, 00:28:22.720 "method": "bdev_nvme_attach_controller" 00:28:22.720 } 00:28:22.720 EOF 00:28:22.720 )") 00:28:22.720 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:22.721 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:22.721 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:22.721 { 00:28:22.721 "params": { 00:28:22.721 "name": "Nvme$subsystem", 00:28:22.721 "trtype": "$TEST_TRANSPORT", 00:28:22.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.721 "adrfam": "ipv4", 00:28:22.721 "trsvcid": "$NVMF_PORT", 00:28:22.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.721 "hdgst": ${hdgst:-false}, 00:28:22.721 "ddgst": ${ddgst:-false} 00:28:22.721 }, 00:28:22.721 "method": "bdev_nvme_attach_controller" 00:28:22.721 } 00:28:22.721 EOF 00:28:22.721 )") 00:28:22.721 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:22.721 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:28:22.721 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:28:22.721 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:22.721 "params": { 00:28:22.721 "name": "Nvme1", 00:28:22.721 "trtype": "tcp", 00:28:22.721 "traddr": "10.0.0.2", 00:28:22.721 "adrfam": "ipv4", 00:28:22.721 "trsvcid": "4420", 00:28:22.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:22.721 "hdgst": false, 00:28:22.721 "ddgst": false 00:28:22.721 }, 00:28:22.721 "method": "bdev_nvme_attach_controller" 00:28:22.721 },{ 00:28:22.721 "params": { 00:28:22.721 "name": "Nvme2", 00:28:22.721 "trtype": "tcp", 00:28:22.721 "traddr": "10.0.0.2", 00:28:22.721 "adrfam": "ipv4", 00:28:22.721 "trsvcid": "4420", 00:28:22.721 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:22.721 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:22.721 "hdgst": false, 00:28:22.721 "ddgst": false 00:28:22.721 }, 00:28:22.721 "method": "bdev_nvme_attach_controller" 00:28:22.721 },{ 00:28:22.721 "params": { 00:28:22.721 "name": "Nvme3", 00:28:22.721 "trtype": "tcp", 00:28:22.721 "traddr": "10.0.0.2", 00:28:22.721 "adrfam": "ipv4", 00:28:22.721 "trsvcid": "4420", 00:28:22.721 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:22.721 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:22.721 "hdgst": false, 00:28:22.721 "ddgst": false 00:28:22.721 }, 00:28:22.721 "method": "bdev_nvme_attach_controller" 00:28:22.721 },{ 00:28:22.721 "params": { 00:28:22.721 "name": "Nvme4", 00:28:22.721 "trtype": "tcp", 00:28:22.721 "traddr": "10.0.0.2", 00:28:22.721 "adrfam": "ipv4", 00:28:22.721 "trsvcid": "4420", 00:28:22.721 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:22.721 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:22.721 "hdgst": false, 00:28:22.721 "ddgst": false 00:28:22.721 }, 00:28:22.721 "method": "bdev_nvme_attach_controller" 00:28:22.721 },{ 00:28:22.721 "params": { 00:28:22.721 "name": "Nvme5", 00:28:22.721 "trtype": "tcp", 00:28:22.721 "traddr": "10.0.0.2", 00:28:22.721 "adrfam": "ipv4", 00:28:22.721 "trsvcid": "4420", 00:28:22.721 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:22.721 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:22.721 "hdgst": false, 00:28:22.721 "ddgst": false 00:28:22.721 }, 00:28:22.721 "method": "bdev_nvme_attach_controller" 00:28:22.721 },{ 00:28:22.721 "params": { 00:28:22.721 "name": "Nvme6", 00:28:22.721 "trtype": "tcp", 00:28:22.721 "traddr": "10.0.0.2", 00:28:22.721 "adrfam": "ipv4", 00:28:22.721 "trsvcid": "4420", 00:28:22.721 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:22.721 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:22.721 "hdgst": false, 00:28:22.721 "ddgst": false 00:28:22.721 }, 00:28:22.721 "method": "bdev_nvme_attach_controller" 00:28:22.721 },{ 00:28:22.721 "params": { 00:28:22.721 "name": "Nvme7", 00:28:22.721 "trtype": "tcp", 00:28:22.721 "traddr": "10.0.0.2", 00:28:22.721 "adrfam": "ipv4", 00:28:22.721 "trsvcid": "4420", 00:28:22.721 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:22.721 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:22.721 "hdgst": false, 00:28:22.721 "ddgst": false 00:28:22.721 }, 00:28:22.721 "method": "bdev_nvme_attach_controller" 00:28:22.721 },{ 00:28:22.721 "params": { 00:28:22.721 "name": "Nvme8", 00:28:22.721 "trtype": "tcp", 00:28:22.721 "traddr": "10.0.0.2", 00:28:22.721 "adrfam": "ipv4", 00:28:22.721 "trsvcid": "4420", 00:28:22.721 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:22.721 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:22.721 "hdgst": false, 00:28:22.721 "ddgst": false 00:28:22.721 }, 00:28:22.721 "method": "bdev_nvme_attach_controller" 00:28:22.721 },{ 00:28:22.721 "params": { 00:28:22.721 "name": "Nvme9", 00:28:22.721 "trtype": "tcp", 00:28:22.721 "traddr": "10.0.0.2", 00:28:22.721 "adrfam": "ipv4", 00:28:22.721 "trsvcid": "4420", 00:28:22.721 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:22.721 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:22.721 "hdgst": false, 00:28:22.721 "ddgst": false 00:28:22.721 }, 00:28:22.721 "method": "bdev_nvme_attach_controller" 00:28:22.721 },{ 00:28:22.721 "params": { 00:28:22.721 "name": "Nvme10", 00:28:22.721 "trtype": "tcp", 00:28:22.721 "traddr": "10.0.0.2", 00:28:22.721 "adrfam": "ipv4", 00:28:22.721 "trsvcid": "4420", 00:28:22.721 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:22.721 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:22.721 "hdgst": false, 00:28:22.721 "ddgst": false 00:28:22.721 }, 00:28:22.721 "method": "bdev_nvme_attach_controller" 00:28:22.721 }' 00:28:22.721 [2024-10-28 05:04:13.090087] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:28:22.721 [2024-10-28 05:04:13.090164] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:22.721 [2024-10-28 05:04:13.225445] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:22.721 [2024-10-28 05:04:13.264315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.721 [2024-10-28 05:04:13.311759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.096 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.096 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:24.096 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:24.096 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.096 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:24.096 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.096 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2402678 00:28:24.096 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:24.096 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:25.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2402678 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2402491 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:25.469 { 00:28:25.469 "params": { 00:28:25.469 "name": "Nvme$subsystem", 00:28:25.469 "trtype": "$TEST_TRANSPORT", 00:28:25.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.469 "adrfam": "ipv4", 00:28:25.469 "trsvcid": "$NVMF_PORT", 00:28:25.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.469 "hdgst": ${hdgst:-false}, 00:28:25.469 "ddgst": ${ddgst:-false} 00:28:25.469 }, 00:28:25.469 "method": "bdev_nvme_attach_controller" 00:28:25.469 } 00:28:25.469 EOF 00:28:25.469 )") 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:25.469 { 00:28:25.469 "params": { 00:28:25.469 "name": "Nvme$subsystem", 00:28:25.469 "trtype": "$TEST_TRANSPORT", 00:28:25.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.469 "adrfam": "ipv4", 00:28:25.469 "trsvcid": "$NVMF_PORT", 00:28:25.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.469 "hdgst": ${hdgst:-false}, 00:28:25.469 "ddgst": ${ddgst:-false} 00:28:25.469 }, 00:28:25.469 "method": "bdev_nvme_attach_controller" 00:28:25.469 } 00:28:25.469 EOF 00:28:25.469 )") 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:25.469 { 00:28:25.469 "params": { 00:28:25.469 "name": "Nvme$subsystem", 00:28:25.469 "trtype": "$TEST_TRANSPORT", 00:28:25.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.469 "adrfam": "ipv4", 00:28:25.469 "trsvcid": "$NVMF_PORT", 00:28:25.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.469 "hdgst": ${hdgst:-false}, 00:28:25.469 "ddgst": ${ddgst:-false} 00:28:25.469 }, 00:28:25.469 "method": "bdev_nvme_attach_controller" 00:28:25.469 } 00:28:25.469 EOF 00:28:25.469 )") 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:25.469 { 00:28:25.469 "params": { 00:28:25.469 "name": "Nvme$subsystem", 00:28:25.469 "trtype": "$TEST_TRANSPORT", 00:28:25.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.469 "adrfam": "ipv4", 00:28:25.469 "trsvcid": "$NVMF_PORT", 00:28:25.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.469 "hdgst": ${hdgst:-false}, 00:28:25.469 "ddgst": ${ddgst:-false} 00:28:25.469 }, 00:28:25.469 "method": "bdev_nvme_attach_controller" 00:28:25.469 } 00:28:25.469 EOF 00:28:25.469 )") 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:25.469 { 00:28:25.469 "params": { 00:28:25.469 "name": "Nvme$subsystem", 00:28:25.469 "trtype": "$TEST_TRANSPORT", 00:28:25.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.469 "adrfam": "ipv4", 00:28:25.469 "trsvcid": "$NVMF_PORT", 00:28:25.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.469 "hdgst": ${hdgst:-false}, 00:28:25.469 "ddgst": ${ddgst:-false} 00:28:25.469 }, 00:28:25.469 "method": "bdev_nvme_attach_controller" 00:28:25.469 } 00:28:25.469 EOF 00:28:25.469 )") 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:25.469 { 00:28:25.469 "params": { 00:28:25.469 "name": "Nvme$subsystem", 00:28:25.469 "trtype": "$TEST_TRANSPORT", 00:28:25.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.469 "adrfam": "ipv4", 00:28:25.469 "trsvcid": "$NVMF_PORT", 00:28:25.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.469 "hdgst": ${hdgst:-false}, 00:28:25.469 "ddgst": ${ddgst:-false} 00:28:25.469 }, 00:28:25.469 "method": "bdev_nvme_attach_controller" 00:28:25.469 } 00:28:25.469 EOF 00:28:25.469 )") 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:25.469 { 00:28:25.469 "params": { 00:28:25.469 "name": "Nvme$subsystem", 00:28:25.469 "trtype": "$TEST_TRANSPORT", 00:28:25.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.469 "adrfam": "ipv4", 00:28:25.469 "trsvcid": "$NVMF_PORT", 00:28:25.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.469 "hdgst": ${hdgst:-false}, 00:28:25.469 "ddgst": ${ddgst:-false} 00:28:25.469 }, 00:28:25.469 "method": "bdev_nvme_attach_controller" 00:28:25.469 } 00:28:25.469 EOF 00:28:25.469 )") 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:25.469 05:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:25.469 { 00:28:25.469 "params": { 00:28:25.469 "name": "Nvme$subsystem", 00:28:25.469 "trtype": "$TEST_TRANSPORT", 00:28:25.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.469 "adrfam": "ipv4", 00:28:25.469 "trsvcid": "$NVMF_PORT", 00:28:25.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.469 "hdgst": ${hdgst:-false}, 00:28:25.469 "ddgst": ${ddgst:-false} 00:28:25.469 }, 00:28:25.469 "method": "bdev_nvme_attach_controller" 00:28:25.469 } 00:28:25.469 EOF 00:28:25.469 )") 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:25.469 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:25.469 { 00:28:25.469 "params": { 00:28:25.470 "name": "Nvme$subsystem", 00:28:25.470 "trtype": "$TEST_TRANSPORT", 00:28:25.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.470 "adrfam": "ipv4", 00:28:25.470 "trsvcid": "$NVMF_PORT", 00:28:25.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.470 "hdgst": ${hdgst:-false}, 00:28:25.470 "ddgst": ${ddgst:-false} 00:28:25.470 }, 00:28:25.470 "method": "bdev_nvme_attach_controller" 00:28:25.470 } 00:28:25.470 EOF 00:28:25.470 )") 00:28:25.470 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:25.470 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:25.470 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:25.470 { 00:28:25.470 "params": { 00:28:25.470 "name": "Nvme$subsystem", 00:28:25.470 "trtype": "$TEST_TRANSPORT", 00:28:25.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.470 "adrfam": "ipv4", 00:28:25.470 "trsvcid": "$NVMF_PORT", 00:28:25.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.470 "hdgst": ${hdgst:-false}, 00:28:25.470 "ddgst": ${ddgst:-false} 00:28:25.470 }, 00:28:25.470 "method": "bdev_nvme_attach_controller" 00:28:25.470 } 00:28:25.470 EOF 00:28:25.470 )") 00:28:25.470 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:25.470 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:28:25.470 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:28:25.470 05:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:25.470 "params": { 00:28:25.470 "name": "Nvme1", 00:28:25.470 "trtype": "tcp", 00:28:25.470 "traddr": "10.0.0.2", 00:28:25.470 "adrfam": "ipv4", 00:28:25.470 "trsvcid": "4420", 00:28:25.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:25.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:25.470 "hdgst": false, 00:28:25.470 "ddgst": false 00:28:25.470 }, 00:28:25.470 "method": "bdev_nvme_attach_controller" 00:28:25.470 },{ 00:28:25.470 "params": { 00:28:25.470 "name": "Nvme2", 00:28:25.470 "trtype": "tcp", 00:28:25.470 "traddr": "10.0.0.2", 00:28:25.470 "adrfam": "ipv4", 00:28:25.470 "trsvcid": "4420", 00:28:25.470 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:25.470 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:25.470 "hdgst": false, 00:28:25.470 "ddgst": false 00:28:25.470 }, 00:28:25.470 "method": "bdev_nvme_attach_controller" 00:28:25.470 },{ 00:28:25.470 "params": { 00:28:25.470 "name": "Nvme3", 00:28:25.470 "trtype": "tcp", 00:28:25.470 "traddr": "10.0.0.2", 00:28:25.470 "adrfam": "ipv4", 00:28:25.470 "trsvcid": "4420", 00:28:25.470 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:25.470 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:25.470 "hdgst": false, 00:28:25.470 "ddgst": false 00:28:25.470 }, 00:28:25.470 "method": "bdev_nvme_attach_controller" 00:28:25.470 },{ 00:28:25.470 "params": { 00:28:25.470 "name": "Nvme4", 00:28:25.470 "trtype": "tcp", 00:28:25.470 "traddr": "10.0.0.2", 00:28:25.470 "adrfam": "ipv4", 00:28:25.470 "trsvcid": "4420", 00:28:25.470 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:25.470 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:25.470 "hdgst": false, 00:28:25.470 "ddgst": false 00:28:25.470 }, 00:28:25.470 "method": "bdev_nvme_attach_controller" 00:28:25.470 },{ 00:28:25.470 "params": { 00:28:25.470 "name": "Nvme5", 00:28:25.470 "trtype": "tcp", 00:28:25.470 "traddr": "10.0.0.2", 00:28:25.470 "adrfam": "ipv4", 00:28:25.470 "trsvcid": "4420", 00:28:25.470 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:25.470 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:25.470 "hdgst": false, 00:28:25.470 "ddgst": false 00:28:25.470 }, 00:28:25.470 "method": "bdev_nvme_attach_controller" 00:28:25.470 },{ 00:28:25.470 "params": { 00:28:25.470 "name": "Nvme6", 00:28:25.470 "trtype": "tcp", 00:28:25.470 "traddr": "10.0.0.2", 00:28:25.470 "adrfam": "ipv4", 00:28:25.470 "trsvcid": "4420", 00:28:25.470 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:25.470 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:25.470 "hdgst": false, 00:28:25.470 "ddgst": false 00:28:25.470 }, 00:28:25.470 "method": "bdev_nvme_attach_controller" 00:28:25.470 },{ 00:28:25.470 "params": { 00:28:25.470 "name": "Nvme7", 00:28:25.470 "trtype": "tcp", 00:28:25.470 "traddr": "10.0.0.2", 00:28:25.470 "adrfam": "ipv4", 00:28:25.470 "trsvcid": "4420", 00:28:25.470 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:25.470 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:25.470 "hdgst": false, 00:28:25.470 "ddgst": false 00:28:25.470 }, 00:28:25.470 "method": "bdev_nvme_attach_controller" 00:28:25.470 },{ 00:28:25.470 "params": { 00:28:25.470 "name": "Nvme8", 00:28:25.470 "trtype": "tcp", 00:28:25.470 "traddr": "10.0.0.2", 00:28:25.470 "adrfam": "ipv4", 00:28:25.470 "trsvcid": "4420", 00:28:25.470 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:25.470 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:25.470 "hdgst": false, 00:28:25.470 "ddgst": false 00:28:25.470 }, 00:28:25.470 "method": "bdev_nvme_attach_controller" 00:28:25.470 },{ 00:28:25.470 "params": { 00:28:25.470 "name": "Nvme9", 00:28:25.470 "trtype": "tcp", 00:28:25.470 "traddr": "10.0.0.2", 00:28:25.470 "adrfam": "ipv4", 00:28:25.470 "trsvcid": "4420", 00:28:25.470 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:25.470 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:25.470 "hdgst": false, 00:28:25.470 "ddgst": false 00:28:25.470 }, 00:28:25.470 "method": "bdev_nvme_attach_controller" 00:28:25.470 },{ 00:28:25.470 "params": { 00:28:25.470 "name": "Nvme10", 00:28:25.470 "trtype": "tcp", 00:28:25.470 "traddr": "10.0.0.2", 00:28:25.470 "adrfam": "ipv4", 00:28:25.470 "trsvcid": "4420", 00:28:25.470 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:25.470 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:25.470 "hdgst": false, 00:28:25.470 "ddgst": false 00:28:25.470 }, 00:28:25.470 "method": "bdev_nvme_attach_controller" 00:28:25.470 }' 00:28:25.470 [2024-10-28 05:04:15.720757] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:28:25.470 [2024-10-28 05:04:15.720839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403079 ] 00:28:25.470 [2024-10-28 05:04:15.856555] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:25.470 [2024-10-28 05:04:15.894748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.470 [2024-10-28 05:04:15.943899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.843 Running I/O for 1 seconds... 
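The one-second verify run whose results follow was launched with the bdevperf invocation traced further up. Condensed, and assuming test/nvmf/common.sh has been sourced so gen_nvmf_target_json is available, it is equivalent to roughly:

    # 64 outstanding 64 KiB verify I/Os for 1 second against each of the ten
    # attached NVMe-oF namespaces; the JSON on the substituted fd is the
    # bdev_nvme_attach_controller config printed above
    ./build/examples/bdevperf \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1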
00:28:27.777 1672.00 IOPS, 104.50 MiB/s
00:28:27.777 Latency(us)
00:28:27.777 [2024-10-28T04:04:18.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:27.777 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:27.777 Verification LBA range: start 0x0 length 0x400
00:28:27.777 Nvme1n1 : 1.06 180.67 11.29 0.00 0.00 350744.68 22092.70 288081.02
00:28:27.777 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:27.777 Verification LBA range: start 0x0 length 0x400
00:28:27.777 Nvme2n1 : 1.15 226.79 14.17 0.00 0.00 273388.04 5231.20 267837.49
00:28:27.777 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:27.777 Verification LBA range: start 0x0 length 0x400
00:28:27.777 Nvme3n1 : 1.15 222.01 13.88 0.00 0.00 276472.59 18978.31 274066.27
00:28:27.777 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:27.777 Verification LBA range: start 0x0 length 0x400
00:28:27.777 Nvme4n1 : 1.12 228.68 14.29 0.00 0.00 262266.04 18394.36 264723.10
00:28:27.777 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:27.777 Verification LBA range: start 0x0 length 0x400
00:28:27.777 Nvme5n1 : 1.14 228.05 14.25 0.00 0.00 258093.84 6058.46 263165.91
00:28:27.777 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:27.777 Verification LBA range: start 0x0 length 0x400
00:28:27.777 Nvme6n1 : 1.16 220.57 13.79 0.00 0.00 264797.05 20438.18 292752.61
00:28:27.777 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:27.778 Verification LBA range: start 0x0 length 0x400
00:28:27.778 Nvme7n1 : 1.15 236.67 14.79 0.00 0.00 238313.01 8564.57 266280.30
00:28:27.778 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:27.778 Verification LBA range: start 0x0 length 0x400
00:28:27.778 Nvme8n1 : 1.15 226.30 14.14 0.00 0.00 247896.32 4817.57 255379.94
00:28:27.778 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:27.778 Verification LBA range: start 0x0 length 0x400
00:28:27.778 Nvme9n1 : 1.17 219.01 13.69 0.00 0.00 253461.57 22190.02 302095.78
00:28:27.778 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:27.778 Verification LBA range: start 0x0 length 0x400
00:28:27.778 Nvme10n1 : 1.17 219.72 13.73 0.00 0.00 248285.12 21216.78 277180.66
00:28:27.778 [2024-10-28T04:04:18.374Z] ===================================================================================================================
00:28:27.778 [2024-10-28T04:04:18.374Z] Total : 2208.47 138.03 0.00 0.00 265048.13 4817.57 302095.78
00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:28.036 05:04:18
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:28.036 rmmod nvme_tcp 00:28:28.036 rmmod nvme_fabrics 00:28:28.036 rmmod nvme_keyring 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 2402491 ']' 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 2402491 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2402491 ']' 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2402491 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:28.036 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2402491 00:28:28.294 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:28.294 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:28.294 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2402491' 00:28:28.294 killing process with pid 2402491 00:28:28.294 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2402491 00:28:28.294 05:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2402491 00:28:28.552 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:28.552 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:28.552 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:28.552 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:28.552 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:28:28.552 05:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:28:28.552 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:28.552 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:28.552 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:28.552 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.552 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:28.552 05:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:31.086 00:28:31.086 real 0m11.854s 00:28:31.086 user 0m33.805s 00:28:31.086 sys 0m3.084s 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:31.086 ************************************ 00:28:31.086 END TEST nvmf_shutdown_tc1 00:28:31.086 ************************************ 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:31.086 ************************************ 00:28:31.086 START TEST nvmf_shutdown_tc2 00:28:31.086 ************************************ 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.086 
05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:31.086 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:31.087 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:31.087 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:31.087 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:31.087 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:31.087 05:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.087 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:31.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:28:31.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:28:31.088 00:28:31.088 --- 10.0.0.2 ping statistics --- 00:28:31.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.088 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:31.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:28:31.088 00:28:31.088 --- 10.0.0.1 ping statistics --- 00:28:31.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.088 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2403825 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2403825 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2403825 ']' 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:31.088 05:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:31.088 [2024-10-28 05:04:21.407190] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:28:31.088 [2024-10-28 05:04:21.407282] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.088 [2024-10-28 05:04:21.549580] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:31.088 [2024-10-28 05:04:21.585125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:31.088 [2024-10-28 05:04:21.632958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.088 [2024-10-28 05:04:21.633016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.088 [2024-10-28 05:04:21.633041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.088 [2024-10-28 05:04:21.633055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:31.088 [2024-10-28 05:04:21.633066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
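The rpcs.txt that shutdown.sh assembles just below (the repeated "cat" blocks, one per subsystem) is replayed against the target after the nvmf_create_transport call, but the trace never expands the individual RPCs. A plausible sketch of one subsystem's worth of calls with SPDK's scripts/rpc.py follows; the rpc.py path, the malloc geometry and the serial number are assumptions, while the NQN pattern and the 10.0.0.2:4420 listener match this run:

# Sketch of what one numbered entry in rpcs.txt typically amounts to.
# Assumptions: rpc.py location, 64 MiB x 512 B malloc geometry, serial number.
RPC=/path/to/spdk/scripts/rpc.py        # talks to the target on /var/tmp/spdk.sock
i=1
"$RPC" bdev_malloc_create 64 512 -b "Malloc$i"
"$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
"$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
"$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420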
00:28:31.088 [2024-10-28 05:04:21.634725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.088 [2024-10-28 05:04:21.634751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:31.088 [2024-10-28 05:04:21.634802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:31.088 [2024-10-28 05:04:21.634805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.023 [2024-10-28 05:04:22.477763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.023 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:32.024 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:32.024 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.024 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.024 Malloc1 00:28:32.024 [2024-10-28 05:04:22.576130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.024 Malloc2 00:28:32.282 Malloc3 00:28:32.282 Malloc4 00:28:32.282 Malloc5 00:28:32.282 Malloc6 00:28:32.282 Malloc7 00:28:32.541 Malloc8 00:28:32.541 Malloc9 00:28:32.541 Malloc10 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2404008 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2404008 /var/tmp/bdevperf.sock 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2404008 ']' 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:32.541 05:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:32.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:32.541 { 00:28:32.541 "params": { 00:28:32.541 "name": "Nvme$subsystem", 00:28:32.541 "trtype": "$TEST_TRANSPORT", 00:28:32.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.541 "adrfam": "ipv4", 00:28:32.541 "trsvcid": "$NVMF_PORT", 00:28:32.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.541 "hdgst": ${hdgst:-false}, 00:28:32.541 "ddgst": ${ddgst:-false} 00:28:32.541 }, 00:28:32.541 "method": "bdev_nvme_attach_controller" 00:28:32.541 } 00:28:32.541 EOF 00:28:32.541 )") 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:32.541 { 00:28:32.541 "params": { 00:28:32.541 "name": "Nvme$subsystem", 00:28:32.541 "trtype": "$TEST_TRANSPORT", 00:28:32.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.541 "adrfam": "ipv4", 00:28:32.541 "trsvcid": "$NVMF_PORT", 00:28:32.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.541 "hdgst": ${hdgst:-false}, 00:28:32.541 "ddgst": ${ddgst:-false} 00:28:32.541 }, 00:28:32.541 "method": "bdev_nvme_attach_controller" 00:28:32.541 } 00:28:32.541 EOF 00:28:32.541 )") 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:32.541 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:32.541 { 00:28:32.541 "params": { 00:28:32.541 
"name": "Nvme$subsystem", 00:28:32.541 "trtype": "$TEST_TRANSPORT", 00:28:32.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.541 "adrfam": "ipv4", 00:28:32.541 "trsvcid": "$NVMF_PORT", 00:28:32.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.542 "hdgst": ${hdgst:-false}, 00:28:32.542 "ddgst": ${ddgst:-false} 00:28:32.542 }, 00:28:32.542 "method": "bdev_nvme_attach_controller" 00:28:32.542 } 00:28:32.542 EOF 00:28:32.542 )") 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:32.542 { 00:28:32.542 "params": { 00:28:32.542 "name": "Nvme$subsystem", 00:28:32.542 "trtype": "$TEST_TRANSPORT", 00:28:32.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.542 "adrfam": "ipv4", 00:28:32.542 "trsvcid": "$NVMF_PORT", 00:28:32.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.542 "hdgst": ${hdgst:-false}, 00:28:32.542 "ddgst": ${ddgst:-false} 00:28:32.542 }, 00:28:32.542 "method": "bdev_nvme_attach_controller" 00:28:32.542 } 00:28:32.542 EOF 00:28:32.542 )") 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:32.542 { 00:28:32.542 "params": { 00:28:32.542 "name": "Nvme$subsystem", 00:28:32.542 "trtype": "$TEST_TRANSPORT", 00:28:32.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.542 "adrfam": "ipv4", 00:28:32.542 "trsvcid": "$NVMF_PORT", 00:28:32.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.542 "hdgst": ${hdgst:-false}, 00:28:32.542 "ddgst": ${ddgst:-false} 00:28:32.542 }, 00:28:32.542 "method": "bdev_nvme_attach_controller" 00:28:32.542 } 00:28:32.542 EOF 00:28:32.542 )") 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:32.542 { 00:28:32.542 "params": { 00:28:32.542 "name": "Nvme$subsystem", 00:28:32.542 "trtype": "$TEST_TRANSPORT", 00:28:32.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.542 "adrfam": "ipv4", 00:28:32.542 "trsvcid": "$NVMF_PORT", 00:28:32.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.542 "hdgst": ${hdgst:-false}, 00:28:32.542 "ddgst": ${ddgst:-false} 00:28:32.542 }, 00:28:32.542 "method": "bdev_nvme_attach_controller" 00:28:32.542 } 00:28:32.542 EOF 00:28:32.542 )") 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:32.542 { 00:28:32.542 "params": { 00:28:32.542 "name": "Nvme$subsystem", 00:28:32.542 "trtype": "$TEST_TRANSPORT", 00:28:32.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.542 "adrfam": "ipv4", 00:28:32.542 "trsvcid": "$NVMF_PORT", 00:28:32.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.542 "hdgst": ${hdgst:-false}, 00:28:32.542 "ddgst": ${ddgst:-false} 00:28:32.542 }, 00:28:32.542 "method": "bdev_nvme_attach_controller" 00:28:32.542 } 00:28:32.542 EOF 00:28:32.542 )") 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:32.542 { 00:28:32.542 "params": { 00:28:32.542 "name": "Nvme$subsystem", 00:28:32.542 "trtype": "$TEST_TRANSPORT", 00:28:32.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.542 "adrfam": "ipv4", 00:28:32.542 "trsvcid": "$NVMF_PORT", 00:28:32.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.542 "hdgst": ${hdgst:-false}, 00:28:32.542 "ddgst": ${ddgst:-false} 00:28:32.542 }, 00:28:32.542 "method": "bdev_nvme_attach_controller" 00:28:32.542 } 00:28:32.542 EOF 00:28:32.542 )") 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:32.542 { 00:28:32.542 "params": { 00:28:32.542 "name": "Nvme$subsystem", 00:28:32.542 "trtype": "$TEST_TRANSPORT", 00:28:32.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.542 "adrfam": "ipv4", 00:28:32.542 "trsvcid": "$NVMF_PORT", 00:28:32.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.542 "hdgst": ${hdgst:-false}, 00:28:32.542 "ddgst": ${ddgst:-false} 00:28:32.542 }, 00:28:32.542 "method": "bdev_nvme_attach_controller" 00:28:32.542 } 00:28:32.542 EOF 00:28:32.542 )") 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:32.542 { 00:28:32.542 "params": { 00:28:32.542 "name": "Nvme$subsystem", 00:28:32.542 "trtype": "$TEST_TRANSPORT", 00:28:32.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.542 "adrfam": "ipv4", 00:28:32.542 "trsvcid": "$NVMF_PORT", 00:28:32.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.542 "hdgst": ${hdgst:-false}, 00:28:32.542 "ddgst": ${ddgst:-false} 00:28:32.542 }, 00:28:32.542 "method": "bdev_nvme_attach_controller" 00:28:32.542 } 00:28:32.542 EOF 00:28:32.542 )") 00:28:32.542 05:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:28:32.542 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:32.542 "params": { 00:28:32.542 "name": "Nvme1", 00:28:32.542 "trtype": "tcp", 00:28:32.543 "traddr": "10.0.0.2", 00:28:32.543 "adrfam": "ipv4", 00:28:32.543 "trsvcid": "4420", 00:28:32.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:32.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:32.543 "hdgst": false, 00:28:32.543 "ddgst": false 00:28:32.543 }, 00:28:32.543 "method": "bdev_nvme_attach_controller" 00:28:32.543 },{ 00:28:32.543 "params": { 00:28:32.543 "name": "Nvme2", 00:28:32.543 "trtype": "tcp", 00:28:32.543 "traddr": "10.0.0.2", 00:28:32.543 "adrfam": "ipv4", 00:28:32.543 "trsvcid": "4420", 00:28:32.543 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:32.543 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:32.543 "hdgst": false, 00:28:32.543 "ddgst": false 00:28:32.543 }, 00:28:32.543 "method": "bdev_nvme_attach_controller" 00:28:32.543 },{ 00:28:32.543 "params": { 00:28:32.543 "name": "Nvme3", 00:28:32.543 "trtype": "tcp", 00:28:32.543 "traddr": "10.0.0.2", 00:28:32.543 "adrfam": "ipv4", 00:28:32.543 "trsvcid": "4420", 00:28:32.543 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:32.543 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:32.543 "hdgst": false, 00:28:32.543 "ddgst": false 00:28:32.543 }, 00:28:32.543 "method": "bdev_nvme_attach_controller" 00:28:32.543 },{ 00:28:32.543 "params": { 00:28:32.543 "name": "Nvme4", 00:28:32.543 "trtype": "tcp", 00:28:32.543 "traddr": "10.0.0.2", 00:28:32.543 "adrfam": "ipv4", 00:28:32.543 "trsvcid": "4420", 00:28:32.543 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:32.543 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:32.543 "hdgst": false, 00:28:32.543 "ddgst": false 00:28:32.543 }, 00:28:32.543 "method": "bdev_nvme_attach_controller" 00:28:32.543 },{ 00:28:32.543 "params": { 00:28:32.543 "name": "Nvme5", 00:28:32.543 "trtype": "tcp", 00:28:32.543 "traddr": "10.0.0.2", 00:28:32.543 "adrfam": "ipv4", 00:28:32.543 "trsvcid": "4420", 00:28:32.543 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:32.543 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:32.543 "hdgst": false, 00:28:32.543 "ddgst": false 00:28:32.543 }, 00:28:32.543 "method": "bdev_nvme_attach_controller" 00:28:32.543 },{ 00:28:32.543 "params": { 00:28:32.543 "name": "Nvme6", 00:28:32.543 "trtype": "tcp", 00:28:32.543 "traddr": "10.0.0.2", 00:28:32.543 "adrfam": "ipv4", 00:28:32.543 "trsvcid": "4420", 00:28:32.543 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:32.543 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:32.543 "hdgst": false, 00:28:32.543 "ddgst": false 00:28:32.543 }, 00:28:32.543 "method": "bdev_nvme_attach_controller" 00:28:32.543 },{ 00:28:32.543 "params": { 00:28:32.543 "name": "Nvme7", 00:28:32.543 "trtype": "tcp", 00:28:32.543 "traddr": "10.0.0.2", 00:28:32.543 "adrfam": "ipv4", 00:28:32.543 "trsvcid": "4420", 00:28:32.543 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:32.543 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:32.543 "hdgst": false, 00:28:32.543 "ddgst": false 00:28:32.543 }, 00:28:32.543 "method": "bdev_nvme_attach_controller" 00:28:32.543 },{ 00:28:32.543 "params": { 00:28:32.543 "name": "Nvme8", 00:28:32.543 "trtype": "tcp", 
00:28:32.543 "traddr": "10.0.0.2", 00:28:32.543 "adrfam": "ipv4", 00:28:32.543 "trsvcid": "4420", 00:28:32.543 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:32.543 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:32.543 "hdgst": false, 00:28:32.543 "ddgst": false 00:28:32.543 }, 00:28:32.543 "method": "bdev_nvme_attach_controller" 00:28:32.543 },{ 00:28:32.543 "params": { 00:28:32.543 "name": "Nvme9", 00:28:32.543 "trtype": "tcp", 00:28:32.543 "traddr": "10.0.0.2", 00:28:32.543 "adrfam": "ipv4", 00:28:32.543 "trsvcid": "4420", 00:28:32.543 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:32.543 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:32.543 "hdgst": false, 00:28:32.543 "ddgst": false 00:28:32.543 }, 00:28:32.543 "method": "bdev_nvme_attach_controller" 00:28:32.543 },{ 00:28:32.543 "params": { 00:28:32.543 "name": "Nvme10", 00:28:32.543 "trtype": "tcp", 00:28:32.543 "traddr": "10.0.0.2", 00:28:32.543 "adrfam": "ipv4", 00:28:32.543 "trsvcid": "4420", 00:28:32.543 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:32.543 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:32.543 "hdgst": false, 00:28:32.543 "ddgst": false 00:28:32.543 }, 00:28:32.543 "method": "bdev_nvme_attach_controller" 00:28:32.543 }' 00:28:32.543 [2024-10-28 05:04:23.089273] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:28:32.543 [2024-10-28 05:04:23.089367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404008 ] 00:28:32.802 [2024-10-28 05:04:23.225714] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:32.802 [2024-10-28 05:04:23.263912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.802 [2024-10-28 05:04:23.310866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.699 Running I/O for 10 seconds... 
00:28:34.699 05:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:34.699 05:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:34.699 05:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:34.699 05:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.700 05:04:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:34.700 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:34.957 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:34.958 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:34.958 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:34.958 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:34.958 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.958 05:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.958 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.958 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:34.958 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:34.958 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2404008 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2404008 ']' 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2404008 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:35.216 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2404008 00:28:35.474 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:35.474 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:35.474 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2404008' 00:28:35.474 killing process with pid 2404008 00:28:35.474 05:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2404008 00:28:35.474 05:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2404008 00:28:35.474 Received shutdown signal, test time was about 0.929561 seconds 00:28:35.474 00:28:35.474 Latency(us) 00:28:35.474 [2024-10-28T04:04:26.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.474 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.474 Verification LBA range: start 0x0 length 0x400 00:28:35.474 Nvme1n1 : 0.91 211.39 13.21 0.00 0.00 299241.93 23357.92 252265.55 00:28:35.474 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.474 Verification LBA range: start 0x0 length 0x400 00:28:35.474 Nvme2n1 : 0.91 280.32 17.52 0.00 0.00 220496.45 18880.99 246036.77 00:28:35.474 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.474 Verification LBA range: start 0x0 length 0x400 00:28:35.474 Nvme3n1 : 0.92 277.52 17.34 0.00 0.00 218744.04 20632.83 241365.18 00:28:35.474 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.474 Verification LBA range: start 0x0 length 0x400 00:28:35.474 Nvme4n1 : 0.93 275.65 17.23 0.00 0.00 215302.51 18102.39 255379.94 00:28:35.474 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.474 Verification LBA range: start 0x0 length 0x400 00:28:35.474 Nvme5n1 : 0.92 208.98 13.06 0.00 0.00 278158.47 27056.26 269394.69 00:28:35.474 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.474 Verification LBA range: start 0x0 length 0x400 00:28:35.474 Nvme6n1 : 0.89 222.11 13.88 0.00 0.00 252029.83 5839.48 232022.01 00:28:35.474 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.474 Verification LBA range: start 0x0 length 0x400 00:28:35.474 Nvme7n1 : 0.88 217.24 13.58 0.00 0.00 254068.33 21898.05 241365.18 00:28:35.474 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.474 Verification LBA range: start 0x0 length 0x400 00:28:35.474 Nvme8n1 : 0.89 214.80 13.42 0.00 0.00 251689.96 19270.28 256937.13 00:28:35.474 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.474 Verification LBA range: start 0x0 length 0x400 00:28:35.474 Nvme9n1 : 0.93 206.95 12.93 0.00 0.00 256705.48 22968.62 306767.36 00:28:35.474 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.474 Verification LBA range: start 0x0 length 0x400 00:28:35.474 Nvme10n1 : 0.90 212.53 13.28 0.00 0.00 243215.62 20243.53 253822.74 00:28:35.474 [2024-10-28T04:04:26.070Z] =================================================================================================================== 00:28:35.474 [2024-10-28T04:04:26.070Z] Total : 2327.50 145.47 0.00 0.00 246180.54 5839.48 306767.36 00:28:35.732 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2403825 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:36.665 05:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.665 rmmod nvme_tcp 00:28:36.665 rmmod nvme_fabrics 00:28:36.665 rmmod nvme_keyring 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 2403825 ']' 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 2403825 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2403825 ']' 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2403825 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2403825 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2403825' 00:28:36.665 killing process with pid 2403825 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2403825 00:28:36.665 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2403825 00:28:37.231 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:37.231 05:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:37.231 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:37.231 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:37.231 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:28:37.231 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:37.231 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:28:37.231 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:37.231 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:37.231 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.231 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.231 05:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.143 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:39.143 00:28:39.143 real 0m8.495s 00:28:39.143 user 0m26.773s 00:28:39.143 sys 0m1.506s 00:28:39.143 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:39.143 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.143 ************************************ 00:28:39.143 END TEST nvmf_shutdown_tc2 00:28:39.143 ************************************ 00:28:39.143 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:39.143 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:39.143 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:39.143 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:39.407 ************************************ 00:28:39.407 START TEST nvmf_shutdown_tc3 00:28:39.407 ************************************ 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@436 -- # local -g is_hw=no 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:39.407 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:39.408 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:39.408 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.408 05:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:39.408 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:39.408 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.408 05:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:39.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:28:39.408 00:28:39.408 --- 10.0.0.2 ping statistics --- 00:28:39.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.408 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:28:39.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:39.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:28:39.408 00:28:39.408 --- 10.0.0.1 ping statistics --- 00:28:39.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.409 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:39.409 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:39.666 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=2404903 00:28:39.666 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:39.666 05:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 2404903 00:28:39.666 05:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2404903 ']' 00:28:39.666 05:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.666 05:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:39.666 05:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.666 05:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:39.666 05:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:39.666 [2024-10-28 05:04:30.059581] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:28:39.666 [2024-10-28 05:04:30.059735] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.666 [2024-10-28 05:04:30.201972] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:39.666 [2024-10-28 05:04:30.244594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.924 [2024-10-28 05:04:30.298388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.924 [2024-10-28 05:04:30.298456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.924 [2024-10-28 05:04:30.298474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.924 [2024-10-28 05:04:30.298488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.924 [2024-10-28 05:04:30.298500] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
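The nvmfappstart call traced above is what brings up the tc3 target inside the cvl_0_0_ns_spdk namespace prepared earlier. A minimal reconstruction of that launch, with the tripled 'ip netns exec' prefix from the trace collapsed to a single exec and the core-mask breakdown added as interpretation, not taken from the log:

    # Launch nvmf_tgt in the target namespace (sketch reconstructed from the trace).
    # -i 0       shared-memory instance id
    # -e 0xFFFF  enable all tracepoint groups ("Tracepoint Group Mask 0xFFFF specified")
    # -m 0x1E    core mask 0b11110, i.e. reactors on cores 1-4
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # autotest helper: waits for /var/tmp/spdk.sock

The four "Reactor started on core N" notices that follow are consistent with that mask.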
00:28:39.924 [2024-10-28 05:04:30.300320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.924 [2024-10-28 05:04:30.300388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.924 [2024-10-28 05:04:30.300466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:39.924 [2024-10-28 05:04:30.300469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.489 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:40.489 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:40.489 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:40.489 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:40.489 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:40.747 [2024-10-28 05:04:31.091004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:40.747 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.748 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:40.748 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.748 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:40.748 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.748 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:40.748 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.748 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:40.748 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.748 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:40.748 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:40.748 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.748 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:40.748 Malloc1 00:28:40.748 [2024-10-28 05:04:31.175446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.748 Malloc2 00:28:40.748 Malloc3 00:28:40.748 Malloc4 00:28:40.748 Malloc5 00:28:41.006 Malloc6 00:28:41.006 Malloc7 00:28:41.006 Malloc8 00:28:41.006 Malloc9 00:28:41.006 Malloc10 00:28:41.265 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.265 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:41.265 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:41.265 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:41.265 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2405205 00:28:41.265 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2405205 /var/tmp/bdevperf.sock 00:28:41.265 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2405205 ']' 00:28:41.265 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:41.265 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:41.265 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:41.265 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:41.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:41.266 { 00:28:41.266 "params": { 00:28:41.266 "name": "Nvme$subsystem", 00:28:41.266 "trtype": "$TEST_TRANSPORT", 00:28:41.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.266 "adrfam": "ipv4", 00:28:41.266 "trsvcid": "$NVMF_PORT", 00:28:41.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.266 "hdgst": ${hdgst:-false}, 00:28:41.266 "ddgst": ${ddgst:-false} 00:28:41.266 }, 00:28:41.266 "method": "bdev_nvme_attach_controller" 00:28:41.266 } 00:28:41.266 EOF 00:28:41.266 )") 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:41.266 { 00:28:41.266 "params": { 00:28:41.266 "name": "Nvme$subsystem", 00:28:41.266 "trtype": "$TEST_TRANSPORT", 00:28:41.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.266 "adrfam": "ipv4", 00:28:41.266 "trsvcid": "$NVMF_PORT", 00:28:41.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.266 "hdgst": ${hdgst:-false}, 00:28:41.266 "ddgst": ${ddgst:-false} 00:28:41.266 }, 00:28:41.266 "method": "bdev_nvme_attach_controller" 00:28:41.266 } 00:28:41.266 EOF 00:28:41.266 )") 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:41.266 { 00:28:41.266 "params": { 00:28:41.266 "name": 
"Nvme$subsystem", 00:28:41.266 "trtype": "$TEST_TRANSPORT", 00:28:41.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.266 "adrfam": "ipv4", 00:28:41.266 "trsvcid": "$NVMF_PORT", 00:28:41.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.266 "hdgst": ${hdgst:-false}, 00:28:41.266 "ddgst": ${ddgst:-false} 00:28:41.266 }, 00:28:41.266 "method": "bdev_nvme_attach_controller" 00:28:41.266 } 00:28:41.266 EOF 00:28:41.266 )") 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:41.266 { 00:28:41.266 "params": { 00:28:41.266 "name": "Nvme$subsystem", 00:28:41.266 "trtype": "$TEST_TRANSPORT", 00:28:41.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.266 "adrfam": "ipv4", 00:28:41.266 "trsvcid": "$NVMF_PORT", 00:28:41.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.266 "hdgst": ${hdgst:-false}, 00:28:41.266 "ddgst": ${ddgst:-false} 00:28:41.266 }, 00:28:41.266 "method": "bdev_nvme_attach_controller" 00:28:41.266 } 00:28:41.266 EOF 00:28:41.266 )") 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:41.266 { 00:28:41.266 "params": { 00:28:41.266 "name": "Nvme$subsystem", 00:28:41.266 "trtype": "$TEST_TRANSPORT", 00:28:41.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.266 "adrfam": "ipv4", 00:28:41.266 "trsvcid": "$NVMF_PORT", 00:28:41.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.266 "hdgst": ${hdgst:-false}, 00:28:41.266 "ddgst": ${ddgst:-false} 00:28:41.266 }, 00:28:41.266 "method": "bdev_nvme_attach_controller" 00:28:41.266 } 00:28:41.266 EOF 00:28:41.266 )") 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:41.266 { 00:28:41.266 "params": { 00:28:41.266 "name": "Nvme$subsystem", 00:28:41.266 "trtype": "$TEST_TRANSPORT", 00:28:41.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.266 "adrfam": "ipv4", 00:28:41.266 "trsvcid": "$NVMF_PORT", 00:28:41.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.266 "hdgst": ${hdgst:-false}, 00:28:41.266 "ddgst": ${ddgst:-false} 00:28:41.266 }, 00:28:41.266 "method": "bdev_nvme_attach_controller" 00:28:41.266 } 00:28:41.266 EOF 00:28:41.266 )") 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 
00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:41.266 { 00:28:41.266 "params": { 00:28:41.266 "name": "Nvme$subsystem", 00:28:41.266 "trtype": "$TEST_TRANSPORT", 00:28:41.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.266 "adrfam": "ipv4", 00:28:41.266 "trsvcid": "$NVMF_PORT", 00:28:41.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.266 "hdgst": ${hdgst:-false}, 00:28:41.266 "ddgst": ${ddgst:-false} 00:28:41.266 }, 00:28:41.266 "method": "bdev_nvme_attach_controller" 00:28:41.266 } 00:28:41.266 EOF 00:28:41.266 )") 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:41.266 { 00:28:41.266 "params": { 00:28:41.266 "name": "Nvme$subsystem", 00:28:41.266 "trtype": "$TEST_TRANSPORT", 00:28:41.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.266 "adrfam": "ipv4", 00:28:41.266 "trsvcid": "$NVMF_PORT", 00:28:41.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.266 "hdgst": ${hdgst:-false}, 00:28:41.266 "ddgst": ${ddgst:-false} 00:28:41.266 }, 00:28:41.266 "method": "bdev_nvme_attach_controller" 00:28:41.266 } 00:28:41.266 EOF 00:28:41.266 )") 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:41.266 { 00:28:41.266 "params": { 00:28:41.266 "name": "Nvme$subsystem", 00:28:41.266 "trtype": "$TEST_TRANSPORT", 00:28:41.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.266 "adrfam": "ipv4", 00:28:41.266 "trsvcid": "$NVMF_PORT", 00:28:41.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.266 "hdgst": ${hdgst:-false}, 00:28:41.266 "ddgst": ${ddgst:-false} 00:28:41.266 }, 00:28:41.266 "method": "bdev_nvme_attach_controller" 00:28:41.266 } 00:28:41.266 EOF 00:28:41.266 )") 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:41.266 { 00:28:41.266 "params": { 00:28:41.266 "name": "Nvme$subsystem", 00:28:41.266 "trtype": "$TEST_TRANSPORT", 00:28:41.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.266 "adrfam": "ipv4", 00:28:41.266 "trsvcid": "$NVMF_PORT", 00:28:41.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.266 "hdgst": ${hdgst:-false}, 00:28:41.266 "ddgst": ${ddgst:-false} 00:28:41.266 }, 00:28:41.266 "method": "bdev_nvme_attach_controller" 00:28:41.266 } 00:28:41.266 EOF 00:28:41.266 )") 00:28:41.266 05:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:28:41.266 05:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:41.266 "params": { 00:28:41.267 "name": "Nvme1", 00:28:41.267 "trtype": "tcp", 00:28:41.267 "traddr": "10.0.0.2", 00:28:41.267 "adrfam": "ipv4", 00:28:41.267 "trsvcid": "4420", 00:28:41.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:41.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:41.267 "hdgst": false, 00:28:41.267 "ddgst": false 00:28:41.267 }, 00:28:41.267 "method": "bdev_nvme_attach_controller" 00:28:41.267 },{ 00:28:41.267 "params": { 00:28:41.267 "name": "Nvme2", 00:28:41.267 "trtype": "tcp", 00:28:41.267 "traddr": "10.0.0.2", 00:28:41.267 "adrfam": "ipv4", 00:28:41.267 "trsvcid": "4420", 00:28:41.267 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:41.267 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:41.267 "hdgst": false, 00:28:41.267 "ddgst": false 00:28:41.267 }, 00:28:41.267 "method": "bdev_nvme_attach_controller" 00:28:41.267 },{ 00:28:41.267 "params": { 00:28:41.267 "name": "Nvme3", 00:28:41.267 "trtype": "tcp", 00:28:41.267 "traddr": "10.0.0.2", 00:28:41.267 "adrfam": "ipv4", 00:28:41.267 "trsvcid": "4420", 00:28:41.267 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:41.267 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:41.267 "hdgst": false, 00:28:41.267 "ddgst": false 00:28:41.267 }, 00:28:41.267 "method": "bdev_nvme_attach_controller" 00:28:41.267 },{ 00:28:41.267 "params": { 00:28:41.267 "name": "Nvme4", 00:28:41.267 "trtype": "tcp", 00:28:41.267 "traddr": "10.0.0.2", 00:28:41.267 "adrfam": "ipv4", 00:28:41.267 "trsvcid": "4420", 00:28:41.267 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:41.267 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:41.267 "hdgst": false, 00:28:41.267 "ddgst": false 00:28:41.267 }, 00:28:41.267 "method": "bdev_nvme_attach_controller" 00:28:41.267 },{ 00:28:41.267 "params": { 00:28:41.267 "name": "Nvme5", 00:28:41.267 "trtype": "tcp", 00:28:41.267 "traddr": "10.0.0.2", 00:28:41.267 "adrfam": "ipv4", 00:28:41.267 "trsvcid": "4420", 00:28:41.267 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:41.267 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:41.267 "hdgst": false, 00:28:41.267 "ddgst": false 00:28:41.267 }, 00:28:41.267 "method": "bdev_nvme_attach_controller" 00:28:41.267 },{ 00:28:41.267 "params": { 00:28:41.267 "name": "Nvme6", 00:28:41.267 "trtype": "tcp", 00:28:41.267 "traddr": "10.0.0.2", 00:28:41.267 "adrfam": "ipv4", 00:28:41.267 "trsvcid": "4420", 00:28:41.267 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:41.267 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:41.267 "hdgst": false, 00:28:41.267 "ddgst": false 00:28:41.267 }, 00:28:41.267 "method": "bdev_nvme_attach_controller" 00:28:41.267 },{ 00:28:41.267 "params": { 00:28:41.267 "name": "Nvme7", 00:28:41.267 "trtype": "tcp", 00:28:41.267 "traddr": "10.0.0.2", 00:28:41.267 "adrfam": "ipv4", 00:28:41.267 "trsvcid": "4420", 00:28:41.267 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:41.267 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:41.267 "hdgst": false, 00:28:41.267 "ddgst": false 00:28:41.267 }, 00:28:41.267 "method": "bdev_nvme_attach_controller" 00:28:41.267 },{ 00:28:41.267 "params": { 00:28:41.267 "name": "Nvme8", 00:28:41.267 "trtype": "tcp", 
00:28:41.267 "traddr": "10.0.0.2", 00:28:41.267 "adrfam": "ipv4", 00:28:41.267 "trsvcid": "4420", 00:28:41.267 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:41.267 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:41.267 "hdgst": false, 00:28:41.267 "ddgst": false 00:28:41.267 }, 00:28:41.267 "method": "bdev_nvme_attach_controller" 00:28:41.267 },{ 00:28:41.267 "params": { 00:28:41.267 "name": "Nvme9", 00:28:41.267 "trtype": "tcp", 00:28:41.267 "traddr": "10.0.0.2", 00:28:41.267 "adrfam": "ipv4", 00:28:41.267 "trsvcid": "4420", 00:28:41.267 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:41.267 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:41.267 "hdgst": false, 00:28:41.267 "ddgst": false 00:28:41.267 }, 00:28:41.267 "method": "bdev_nvme_attach_controller" 00:28:41.267 },{ 00:28:41.267 "params": { 00:28:41.267 "name": "Nvme10", 00:28:41.267 "trtype": "tcp", 00:28:41.267 "traddr": "10.0.0.2", 00:28:41.267 "adrfam": "ipv4", 00:28:41.267 "trsvcid": "4420", 00:28:41.267 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:41.267 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:41.267 "hdgst": false, 00:28:41.267 "ddgst": false 00:28:41.267 }, 00:28:41.267 "method": "bdev_nvme_attach_controller" 00:28:41.267 }' 00:28:41.267 [2024-10-28 05:04:31.683010] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:28:41.267 [2024-10-28 05:04:31.683085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405205 ] 00:28:41.267 [2024-10-28 05:04:31.817168] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:41.267 [2024-10-28 05:04:31.854910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.525 [2024-10-28 05:04:31.902359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.424 Running I/O for 10 seconds... 
00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2404903 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2404903 ']' 
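The gate that let the killprocess above proceed is the waitforio helper: poll bdev_get_iostat over the bdevperf RPC socket until Nvme1n1 reports at least 100 reads, up to 10 attempts with a 0.25 s sleep in between (the tc2 run at 05:04:25 walks the same loop). A reconstruction from the xtrace output, not the verbatim script; the real helper lives in target/shutdown.sh, rpc_cmd wraps scripts/rpc.py, and the '-z' argument checks traced at shutdown.sh@51/@55 are omitted here:

    # Return 0 once the named bdev has completed >= 100 reads, else 1.
    waitforio() {
        local rpc_sock=$1 bdev=$2
        local ret=1 i read_io_count
        for (( i = 10; i != 0; i-- )); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                            | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }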
00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2404903 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:44.029 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2404903 00:28:44.334 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:44.334 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:44.334 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2404903' 00:28:44.334 killing process with pid 2404903 00:28:44.334 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2404903 00:28:44.334 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2404903 00:28:44.334 [2024-10-28 05:04:34.656134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.334 [2024-10-28 05:04:34.656422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656674] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the 
state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.656997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.657009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.657021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.657033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ff40 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 
05:04:34.658869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.335 [2024-10-28 05:04:34.658907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.658919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.658932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.658967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.658980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.658996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same 
with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.659215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14529c0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.336 [2024-10-28 05:04:34.660306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.336 [2024-10-28 05:04:34.660329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.336 [2024-10-28 05:04:34.660344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.336 [2024-10-28 05:04:34.660359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.336 [2024-10-28 05:04:34.660374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.336 [2024-10-28 05:04:34.660389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.336 [2024-10-28 05:04:34.660404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.336 [2024-10-28 05:04:34.660424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258d9b0 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.336 [2024-10-28 05:04:34.660566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.336 [2024-10-28 05:04:34.660582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.336 [2024-10-28 05:04:34.660596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.336 [2024-10-28 05:04:34.660610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.336 [2024-10-28 05:04:34.660639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.336 [2024-10-28 05:04:34.660640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.336 [2024-10-28 05:04:34.660665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.336 [2024-10-28 05:04:34.660679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213d230 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 
05:04:34.660907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.660993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same 
with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.336 [2024-10-28 05:04:34.661225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.661448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450430 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668580] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the 
state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.668982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.669278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450df0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 
05:04:34.670321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.337 [2024-10-28 05:04:34.670345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same 
with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670935] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.670988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.671015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.671029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14512c0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.672105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14517b0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.672137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14517b0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.672152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14517b0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.672171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14517b0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.672184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14517b0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.672211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14517b0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.672224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14517b0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.672235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14517b0 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the 
state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.338 [2024-10-28 05:04:34.673688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 
05:04:34.673921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.673983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.674006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.674018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.674031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.674043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.674055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.674068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.674080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.674092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.674104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451b30 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same 
with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675536] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.339 [2024-10-28 05:04:34.675685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the 
state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.675984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452000 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.676989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 
05:04:34.677160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.677342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524d0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.679064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.340 [2024-10-28 05:04:34.679100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.340 [2024-10-28 05:04:34.679126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.340 [2024-10-28 05:04:34.679140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.340 [2024-10-28 05:04:34.679155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.340 [2024-10-28 05:04:34.679168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.340 [2024-10-28 05:04:34.679181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.340 [2024-10-28 05:04:34.679195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.340 [2024-10-28 05:04:34.679209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258ace0 is same with the state(6) to be set 00:28:44.340 [2024-10-28 05:04:34.679256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x258d9b0 (9): Bad file descriptor 00:28:44.340 [2024-10-28 05:04:34.679313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.340 [2024-10-28 05:04:34.679377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.340 [2024-10-28 05:04:34.679398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.679427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.679455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.679481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21326a0 is same with the state(6) to be set 00:28:44.341 [2024-10-28 05:04:34.679529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.679565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.679593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.679630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.679668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2589e50 is same with the state(6) to be set 00:28:44.341 [2024-10-28 05:04:34.679718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.679801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.679830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.679863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.679892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132fd0 is same with the state(6) to be set 00:28:44.341 [2024-10-28 05:04:34.679948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.679984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.679998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2568730 is same with the state(6) to be set 00:28:44.341 [2024-10-28 05:04:34.680113] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213cdd0 is same with the state(6) to be set 00:28:44.341 [2024-10-28 05:04:34.680263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213d230 (9): Bad file descriptor 00:28:44.341 [2024-10-28 05:04:34.680312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132ae0 is same with the state(6) to be set 00:28:44.341 [2024-10-28 05:04:34.680487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.341 [2024-10-28 05:04:34.680592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.680606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213abf0 is same with the state(6) to be set 00:28:44.341 [2024-10-28 05:04:34.681449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.341 [2024-10-28 05:04:34.681477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.681508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.341 [2024-10-28 05:04:34.681523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.681540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.341 [2024-10-28 05:04:34.681555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.681570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.341 [2024-10-28 05:04:34.681584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.681600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.341 [2024-10-28 05:04:34.681614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.681651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.341 [2024-10-28 05:04:34.681674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.681692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.341 [2024-10-28 
05:04:34.681707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.681722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.341 [2024-10-28 05:04:34.681737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.341 [2024-10-28 05:04:34.681752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.681767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.681783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.681796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.681813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.681827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.681843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.681857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.681873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.681887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.681902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.681916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.681940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.681954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.681969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.681983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682017] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.682974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.682988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.683003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-28 05:04:34.683017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.342 [2024-10-28 05:04:34.683033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.343 [2024-10-28 05:04:34.683707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-28 05:04:34.683768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.343 [2024-10-28 05:04:34.683785] 
00:28:44.343 - 00:28:44.345 [2024-10-28 05:04:34.683707 - 05:04:34.685683] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0-63 nsid:1 lba:24576-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 command/completion pairs condensed]
00:28:44.345 - 00:28:44.346 [2024-10-28 05:04:34.685954 - 05:04:34.688070] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 command/completion pairs condensed]
00:28:44.346 [2024-10-28 05:04:34.710235] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:44.346 [2024-10-28 05:04:34.710327] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:28:44.346 [2024-10-28 05:04:34.710372 - 05:04:34.710599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, for tqpair=0x2132ae0, 0x2568730, 0x258ace0, 0x21326a0, 0x2589e50, 0x2132fd0, 0x213cdd0, 0x213abf0
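After the aborted-I/O dumps, the host starts recovery: nvme_ctrlr_disconnect logs "resetting controller" for cnode4 and cnode6, and the TCP transport reports that the dead qpairs can no longer be flushed (Bad file descriptor). A minimal sketch of the polling side of that recovery follows, assuming a stand-alone SPDK host application rather than the full application driving this test; only spdk_nvme_qpair_process_completions(), spdk_nvme_ctrlr_reset() and the -ENXIO return value are taken from the real API, and the poll_io() wrapper is hypothetical.

/* Sketch only: react to the -6 (ENXIO) transport error seen above by
 * resetting the controller, the step that produces the
 * "resetting controller" notices and the ABORTED - SQ DELETION
 * completions in this log. */
#include <errno.h>
#include "spdk/nvme.h"

static void
poll_io(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	/* 0 == process as many completions as are available */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* Transport-level failure: the TCP connection is gone and
		 * outstanding I/O is failed back with ABORTED - SQ DELETION. */
		if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
			/* a real application would retry the reset or fail over */
		}
		/* After a successful reset the I/O qpairs still have to be
		 * re-created or reconnected before new I/O can be submitted. */
	}
}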
00:28:44.346 [2024-10-28 05:04:34.711080 - 05:04:34.711305] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 and READ sqid:1 cid:0-3 nsid:1 lba:24576-24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.346 - 00:28:44.348 [2024-10-28 05:04:34.711321 - 05:04:34.712991] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:5-59 nsid:1 lba:25216-32128 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [dump continues] 00:28:44.348 [2024-10-28 05:04:34.713006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.713036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.713066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.713095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.713125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713557] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:44.348 [2024-10-28 05:04:34.713716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.713741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.713778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.713809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.713839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.713870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.713907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.713939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.713976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.713991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.714006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.714022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.714036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.714052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.714067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.714084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.714098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.714114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.714128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.714144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.714158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.714174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.714188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.714204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.714228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.714243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.714257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.714273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.714287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.348 [2024-10-28 05:04:34.714303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.348 [2024-10-28 05:04:34.714316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:44.349 [2024-10-28 05:04:34.714532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 
05:04:34.714877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.714978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.714994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715193] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.349 [2024-10-28 05:04:34.715539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.349 [2024-10-28 05:04:34.715553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.715569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.715584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.715600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.715614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.715639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.715656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.715672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.715686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.715702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.715716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.715733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.715747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.718835] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:44.350 [2024-10-28 05:04:34.718938] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:44.350 [2024-10-28 05:04:34.719334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 
05:04:34.719388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.719974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.719988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.350 [2024-10-28 05:04:34.720403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.350 [2024-10-28 05:04:34.720418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.720971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.720986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.721016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.721046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.721076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.721111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.721141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.721172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.721203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.721233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.721263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.721293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.721323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.351 [2024-10-28 05:04:34.721353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.351 [2024-10-28 05:04:34.721368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23449e0 is same with the state(6) to be set 00:28:44.351 [2024-10-28 05:04:34.722590] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:44.351 [2024-10-28 05:04:34.722640] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:44.351 [2024-10-28 05:04:34.722663] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:44.351 [2024-10-28 05:04:34.722897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.351 [2024-10-28 05:04:34.722928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2568730 with addr=10.0.0.2, port=4420 00:28:44.351 [2024-10-28 05:04:34.722946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2568730 is same with the state(6) to be set 00:28:44.351 [2024-10-28 05:04:34.723063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.351 [2024-10-28 05:04:34.723096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2132ae0 with addr=10.0.0.2, port=4420 00:28:44.351 [2024-10-28 05:04:34.723117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132ae0 is same with the state(6) to be set 00:28:44.351 [2024-10-28 05:04:34.723165] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:28:44.351 [2024-10-28 05:04:34.723308] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:44.351 [2024-10-28 05:04:34.723450] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:44.351 [2024-10-28 05:04:34.723550] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:44.351 [2024-10-28 05:04:34.723751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.351 [2024-10-28 05:04:34.723782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x258ace0 with addr=10.0.0.2, port=4420 00:28:44.351 [2024-10-28 05:04:34.723799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258ace0 is same with the state(6) to be set 00:28:44.351 [2024-10-28 05:04:34.723905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.351 [2024-10-28 05:04:34.723937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213d230 with addr=10.0.0.2, port=4420 00:28:44.351 [2024-10-28 05:04:34.723953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213d230 is same with the state(6) to be set 00:28:44.351 [2024-10-28 05:04:34.724132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.352 [2024-10-28 05:04:34.724158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213abf0 with addr=10.0.0.2, port=4420 00:28:44.352 [2024-10-28 05:04:34.724174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213abf0 is same with the state(6) to be set 00:28:44.352 [2024-10-28 05:04:34.724198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2568730 (9): Bad file descriptor 00:28:44.352 [2024-10-28 05:04:34.724218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2132ae0 (9): Bad file descriptor 00:28:44.352 [2024-10-28 05:04:34.724535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724711] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.724975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.724991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.352 [2024-10-28 05:04:34.725609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.352 [2024-10-28 05:04:34.725623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.725646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.725663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.725679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.725693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.725710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.725724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.725740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.725755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.725771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.725785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.725802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.725817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.725834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.725848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.725865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.725880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.725896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.725910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.725926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.725944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.725961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:44.353 [2024-10-28 05:04:34.725975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.725990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 
05:04:34.726275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.726550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.726566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2342860 is same with the state(6) to be set 00:28:44.353 [2024-10-28 05:04:34.728106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.728132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.728160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.728175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.728193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.728206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.728222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.728237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.728258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.728274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.728290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.728304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.728320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.728335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.728350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.728365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.728381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.728396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.728412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.728426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.353 [2024-10-28 05:04:34.728443] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.353 [2024-10-28 05:04:34.728457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.728985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.728999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:44.354 [2024-10-28 05:04:34.729722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.354 [2024-10-28 05:04:34.729753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.354 [2024-10-28 05:04:34.729768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.729783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.729799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.729813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.729833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.729849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.729864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.729878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.729894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.729908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.729925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.729939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.729955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.729970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.729987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.730002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.730018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 
05:04:34.730032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.730048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.730063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.730078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.730092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.730109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.730123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.730137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x253f0b0 is same with the state(6) to be set 00:28:44.355 [2024-10-28 05:04:34.731396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.731968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.731985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.732000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.732015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.732029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.732046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.732061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.732076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.732091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.732107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.732122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.732139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.732153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.732169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.732184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.732200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.732215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.732231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.732246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.355 [2024-10-28 05:04:34.732269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.355 [2024-10-28 05:04:34.732286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.732978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.732992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.356 [2024-10-28 05:04:34.733417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.356 [2024-10-28 05:04:34.733432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.733449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.733463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.733477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2541ba0 is same with the state(6) to be set 00:28:44.357 [2024-10-28 05:04:34.734739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.734762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.734784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.734799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.734816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.734832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.734848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.734863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.734879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.734894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.734910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.734925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.734941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.734955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.734980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.734998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735136] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.735968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.735982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.736002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.736017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.357 [2024-10-28 05:04:34.736033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.357 [2024-10-28 05:04:34.736060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:44.358 [2024-10-28 05:04:34.736457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 
05:04:34.736774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.358 [2024-10-28 05:04:34.736825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.358 [2024-10-28 05:04:34.736839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25430d0 is same with the state(6) to be set 00:28:44.358 [2024-10-28 05:04:34.738795] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:44.358 [2024-10-28 05:04:34.738830] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:44.358 [2024-10-28 05:04:34.738851] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:44.358 task offset: 24576 on job bdev=Nvme4n1 fails 00:28:44.358 00:28:44.358 Latency(us) 00:28:44.358 [2024-10-28T04:04:34.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.358 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:44.358 Job: Nvme1n1 ended in about 0.98 seconds with error 00:28:44.358 Verification LBA range: start 0x0 length 0x400 00:28:44.358 Nvme1n1 : 0.98 130.56 8.16 65.28 0.00 323386.82 23747.22 306767.36 00:28:44.358 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:44.358 Job: Nvme2n1 ended in about 0.99 seconds with error 00:28:44.358 Verification LBA range: start 0x0 length 0x400 00:28:44.358 Nvme2n1 : 0.99 129.14 8.07 64.57 0.00 320814.56 38735.22 263165.91 00:28:44.358 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:44.358 Job: Nvme3n1 ended in about 0.98 seconds with error 00:28:44.358 Verification LBA range: start 0x0 length 0x400 00:28:44.358 Nvme3n1 : 0.98 195.59 12.22 65.20 0.00 233435.50 10900.36 264723.10 00:28:44.358 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:44.358 Job: Nvme4n1 ended in about 0.97 seconds with error 00:28:44.358 Verification LBA range: start 0x0 length 0x400 00:28:44.358 Nvme4n1 : 0.97 197.73 12.36 65.91 0.00 226126.08 20146.21 281852.25 00:28:44.358 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:44.358 Job: Nvme5n1 ended in about 0.99 seconds with error 00:28:44.358 Verification LBA range: start 0x0 length 0x400 00:28:44.358 Nvme5n1 : 0.99 128.68 8.04 64.34 0.00 303288.76 22579.32 281852.25 00:28:44.358 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:44.358 Job: Nvme6n1 ended in about 0.97 seconds with error 00:28:44.358 Verification LBA range: start 0x0 length 0x400 00:28:44.358 Nvme6n1 : 0.97 197.48 12.34 65.83 0.00 217086.86 29586.70 266280.30 00:28:44.358 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:44.358 Job: Nvme7n1 ended in about 1.00 seconds with error 00:28:44.358 Verification LBA range: start 0x0 length 0x400 00:28:44.358 Nvme7n1 : 1.00 128.25 8.02 64.12 0.00 291834.61 39124.52 263165.91 00:28:44.358 
Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:44.358 Job: Nvme8n1 ended in about 1.00 seconds with error 00:28:44.358 Verification LBA range: start 0x0 length 0x400 00:28:44.358 Nvme8n1 : 1.00 127.82 7.99 63.91 0.00 286876.12 17907.74 283409.44 00:28:44.358 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:44.358 Job: Nvme9n1 ended in about 0.97 seconds with error 00:28:44.358 Verification LBA range: start 0x0 length 0x400 00:28:44.358 Nvme9n1 : 0.97 131.47 8.22 65.74 0.00 271570.81 21703.40 320782.11 00:28:44.358 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:44.358 Job: Nvme10n1 ended in about 0.99 seconds with error 00:28:44.358 Verification LBA range: start 0x0 length 0x400 00:28:44.358 Nvme10n1 : 0.99 129.82 8.11 64.91 0.00 269755.35 18491.69 291195.41 00:28:44.358 [2024-10-28T04:04:34.954Z] =================================================================================================================== 00:28:44.358 [2024-10-28T04:04:34.954Z] Total : 1496.54 93.53 649.80 0.00 269975.00 10900.36 320782.11 00:28:44.358 [2024-10-28 05:04:34.764936] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:44.358 [2024-10-28 05:04:34.765040] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:44.358 [2024-10-28 05:04:34.765329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.358 [2024-10-28 05:04:34.765367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x258d9b0 with addr=10.0.0.2, port=4420 00:28:44.358 [2024-10-28 05:04:34.765390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258d9b0 is same with the state(6) to be set 00:28:44.358 [2024-10-28 05:04:34.765420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x258ace0 (9): Bad file descriptor 00:28:44.359 [2024-10-28 05:04:34.765445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213d230 (9): Bad file descriptor 00:28:44.359 [2024-10-28 05:04:34.765464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213abf0 (9): Bad file descriptor 00:28:44.359 [2024-10-28 05:04:34.765482] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:44.359 [2024-10-28 05:04:34.765496] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:44.359 [2024-10-28 05:04:34.765512] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:44.359 [2024-10-28 05:04:34.765541] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:44.359 [2024-10-28 05:04:34.765556] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:44.359 [2024-10-28 05:04:34.765570] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:44.359 [2024-10-28 05:04:34.765601] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:28:44.359 [2024-10-28 05:04:34.765625] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:28:44.359 [2024-10-28 05:04:34.765684] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:44.359 [2024-10-28 05:04:34.765711] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:28:44.359 [2024-10-28 05:04:34.765730] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:28:44.359 [2024-10-28 05:04:34.765752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x258d9b0 (9): Bad file descriptor 00:28:44.359 [2024-10-28 05:04:34.765905] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:44.359 [2024-10-28 05:04:34.765944] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:44.359 [2024-10-28 05:04:34.766129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.359 [2024-10-28 05:04:34.766159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213cdd0 with addr=10.0.0.2, port=4420 00:28:44.359 [2024-10-28 05:04:34.766178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213cdd0 is same with the state(6) to be set 00:28:44.359 [2024-10-28 05:04:34.766305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.359 [2024-10-28 05:04:34.766334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21326a0 with addr=10.0.0.2, port=4420 00:28:44.359 [2024-10-28 05:04:34.766361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21326a0 is same with the state(6) to be set 00:28:44.359 [2024-10-28 05:04:34.766477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.359 [2024-10-28 05:04:34.766504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2132fd0 with addr=10.0.0.2, port=4420 00:28:44.359 [2024-10-28 05:04:34.766522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132fd0 is same with the state(6) to be set 00:28:44.359 [2024-10-28 05:04:34.766649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.359 [2024-10-28 05:04:34.766677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2589e50 with addr=10.0.0.2, port=4420 00:28:44.359 [2024-10-28 05:04:34.766695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2589e50 is same with the state(6) to be set 00:28:44.359 [2024-10-28 05:04:34.766713] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:44.359 [2024-10-28 05:04:34.766729] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:44.359 [2024-10-28 05:04:34.766743] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:28:44.359 [2024-10-28 05:04:34.766763] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:44.359 [2024-10-28 05:04:34.766778] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:44.359 [2024-10-28 05:04:34.766792] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:44.359 [2024-10-28 05:04:34.766827] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:44.359 [2024-10-28 05:04:34.766844] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:44.359 [2024-10-28 05:04:34.766863] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:44.359 [2024-10-28 05:04:34.766929] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:44.359 [2024-10-28 05:04:34.766957] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:28:44.359 [2024-10-28 05:04:34.766979] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:28:44.359 [2024-10-28 05:04:34.766998] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:28:44.359 [2024-10-28 05:04:34.768175] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:44.359 [2024-10-28 05:04:34.768202] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:44.359 [2024-10-28 05:04:34.768224] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:44.359 [2024-10-28 05:04:34.768253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213cdd0 (9): Bad file descriptor 00:28:44.359 [2024-10-28 05:04:34.768275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21326a0 (9): Bad file descriptor 00:28:44.359 [2024-10-28 05:04:34.768293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2132fd0 (9): Bad file descriptor 00:28:44.359 [2024-10-28 05:04:34.768312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2589e50 (9): Bad file descriptor 00:28:44.359 [2024-10-28 05:04:34.768327] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:44.359 [2024-10-28 05:04:34.768346] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:44.359 [2024-10-28 05:04:34.768360] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:28:44.359 [2024-10-28 05:04:34.768435] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:44.359 [2024-10-28 05:04:34.768462] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:44.359 [2024-10-28 05:04:34.768478] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:44.359 [2024-10-28 05:04:34.768510] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:44.359 [2024-10-28 05:04:34.768527] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:44.359 [2024-10-28 05:04:34.768542] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:44.359 [2024-10-28 05:04:34.768560] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:44.359 [2024-10-28 05:04:34.768575] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:44.359 [2024-10-28 05:04:34.768588] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:44.359 [2024-10-28 05:04:34.768616] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:44.359 [2024-10-28 05:04:34.768642] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:44.359 [2024-10-28 05:04:34.768662] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:44.359 [2024-10-28 05:04:34.768683] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:44.359 [2024-10-28 05:04:34.768698] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:44.359 [2024-10-28 05:04:34.768714] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:44.359 [2024-10-28 05:04:34.768785] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:44.359 [2024-10-28 05:04:34.768807] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:44.359 [2024-10-28 05:04:34.768821] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:44.359 [2024-10-28 05:04:34.768836] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:28:44.359 [2024-10-28 05:04:34.768951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.359 [2024-10-28 05:04:34.768983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2132ae0 with addr=10.0.0.2, port=4420 00:28:44.359 [2024-10-28 05:04:34.769001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132ae0 is same with the state(6) to be set 00:28:44.359 [2024-10-28 05:04:34.769129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.359 [2024-10-28 05:04:34.769155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2568730 with addr=10.0.0.2, port=4420 00:28:44.359 [2024-10-28 05:04:34.769172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2568730 is same with the state(6) to be set 00:28:44.359 [2024-10-28 05:04:34.769216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2132ae0 (9): Bad file descriptor 00:28:44.359 [2024-10-28 05:04:34.769240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2568730 (9): Bad file descriptor 00:28:44.359 [2024-10-28 05:04:34.769289] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:44.359 [2024-10-28 05:04:34.769308] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:44.359 [2024-10-28 05:04:34.769323] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:44.359 [2024-10-28 05:04:34.769340] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:44.359 [2024-10-28 05:04:34.769355] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:44.359 [2024-10-28 05:04:34.769369] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:44.359 [2024-10-28 05:04:34.769406] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:44.359 [2024-10-28 05:04:34.769426] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
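The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED on Linux: while the target side is shutting down, nothing is listening on 10.0.0.2:4420 anymore, so every reconnect attempt from the initiator is refused and each retry ends in "Resetting controller failed." As an illustrative sketch only (not part of shutdown.sh), the same errno condition can be reproduced with plain bash by probing the listener address taken from this log:

    # Probe the NVMe/TCP listener address seen in the trace (10.0.0.2, port 4420).
    # With no target listening, the TCP connect is refused -- the same
    # ECONNREFUSED (errno 111) condition posix_sock_create reports above.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener is up"
    else
        echo "connection refused or timed out (no NVMe/TCP target listening)"
    fi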
00:28:44.619 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:45.558 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2405205 00:28:45.558 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:28:45.558 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2405205 00:28:45.558 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:28:45.558 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.558 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2405205 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:45.817 rmmod nvme_tcp 00:28:45.817 
rmmod nvme_fabrics 00:28:45.817 rmmod nvme_keyring 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 2404903 ']' 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 2404903 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2404903 ']' 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2404903 00:28:45.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2404903) - No such process 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 2404903 is not found' 00:28:45.817 Process with pid 2404903 is not found 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.817 05:04:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.723 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:47.723 00:28:47.723 real 0m8.529s 00:28:47.723 user 0m22.486s 00:28:47.723 sys 0m1.550s 00:28:47.723 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:47.723 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:47.723 ************************************ 00:28:47.723 END TEST nvmf_shutdown_tc3 00:28:47.723 ************************************ 00:28:47.723 05:04:38 
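With tc3 finished, the trace above amounts to a short teardown: delete the bdevperf state and generated config, unload the NVMe/TCP fabrics modules, kill the target process if it is still around, restore iptables without the SPDK_NVMF rules, and flush the test interface. The sketch below only condenses the commands already visible in the trace; SPDK_DIR, TARGET_PID and SPDK_TEST_NS are placeholder names introduced here (the harness resolves them through its own helpers such as nvmftestfini and _remove_spdk_ns), so treat this as an outline rather than the script itself:

    #!/usr/bin/env bash
    # Condensed cleanup mirroring the traced stoptarget/nvmftestfini steps.
    rm -f ./local-job0-0-verify.state                      # bdevperf job state file
    rm -rf "$SPDK_DIR/test/nvmf/target/bdevperf.conf"      # generated bdevperf config (placeholder path root)
    rm -rf "$SPDK_DIR/test/nvmf/target/rpcs.txt"           # queued RPC commands
    sync
    modprobe -v -r nvme-tcp        # trace shows this also drops nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    if kill -0 "$TARGET_PID" 2>/dev/null; then             # target app pid, if still alive
        kill -9 "$TARGET_PID"
    fi
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop test-only firewall rules
    ip netns del "$SPDK_TEST_NS" 2>/dev/null || true       # placeholder for the harness's _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # release the test NIC address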
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:47.723 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:47.723 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:47.723 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:47.723 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:47.723 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:47.983 ************************************ 00:28:47.983 START TEST nvmf_shutdown_tc4 00:28:47.983 ************************************ 00:28:47.983 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:28:47.983 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:47.983 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:47.983 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:47.983 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:47.984 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:47.984 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.984 05:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:47.984 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:47.984 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:47.984 05:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.984 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:47.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:28:47.985 00:28:47.985 --- 10.0.0.2 ping statistics --- 00:28:47.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.985 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:47.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:28:47.985 00:28:47.985 --- 10.0.0.1 ping statistics --- 00:28:47.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.985 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=2406091 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 2406091 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 2406091 ']' 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
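For reference, the nvmftestinit sequence traced above reduces to the following setup (a condensed sketch assembled from this trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are specific to this e810 host and will differ elsewhere):

  # target-side port is isolated in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator side keeps 10.0.0.1, target side gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open TCP/4420 for NVMe-oF and verify reachability in both directions
  # (the test tags the rule with an SPDK_NVMF comment so the teardown can strip it later)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # nvmf_tgt then runs inside the namespace on cores 1-4 with tracepoint group mask 0xFFFF
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E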
00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:47.985 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:48.244 [2024-10-28 05:04:38.580354] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:28:48.244 [2024-10-28 05:04:38.580432] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.244 [2024-10-28 05:04:38.719896] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:48.244 [2024-10-28 05:04:38.754376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.244 [2024-10-28 05:04:38.802539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.244 [2024-10-28 05:04:38.802598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.244 [2024-10-28 05:04:38.802613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.244 [2024-10-28 05:04:38.802626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.244 [2024-10-28 05:04:38.802647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:48.244 [2024-10-28 05:04:38.804300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.244 [2024-10-28 05:04:38.804413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.244 [2024-10-28 05:04:38.804480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:48.244 [2024-10-28 05:04:38.804483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:49.179 [2024-10-28 05:04:39.612566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
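The rpc_cmd nvmf_create_transport -t tcp -o -u 8192 call above is what produces the '*** TCP Transport Init ***' notice: it tells the freshly started target, over the /var/tmp/spdk.sock RPC socket, to bring up the NVMe/TCP transport with an 8192-byte I/O unit size. Issued by hand it would look roughly like the sketch below (only -t and -u are spelled out here; the extra '-o' is passed through from NVMF_TRANSPORT_OPTS unchanged):

  # create the NVMe/TCP transport on the running nvmf_tgt (default SPDK RPC socket path)
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t TCP -u 8192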
00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.179 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:49.180 05:04:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:49.180 Malloc1 00:28:49.180 [2024-10-28 05:04:39.712407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.180 Malloc2 00:28:49.439 Malloc3 00:28:49.439 Malloc4 00:28:49.439 Malloc5 00:28:49.439 Malloc6 00:28:49.439 Malloc7 00:28:49.698 Malloc8 00:28:49.698 Malloc9 00:28:49.698 Malloc10 00:28:49.698 05:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.698 05:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:49.698 05:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:49.698 05:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:49.698 05:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2406276 00:28:49.698 05:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:49.698 05:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:49.956 [2024-10-28 05:04:40.356335] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
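To make the failure output below easier to read: shutdown.sh builds one batched RPC file (rpcs.txt) that creates ten malloc-backed subsystems plus a TCP listener, launches spdk_nvme_perf against them, and five seconds in kills the target (pid 2406091) out from under the running workload. Per subsystem the batch is roughly equivalent to the sketch below; the malloc sizes and serial numbers are assumptions, while the nqn.2016-06.io.spdk:cnodeN names, the MallocN bdevs, the 10.0.0.2:4420 listener and the perf command line are taken from this trace:

  for i in $(seq 1 10); do
    ./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512                              # size/block size assumed
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i     # serial number assumed
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

  # the perf job that is still running when the target gets killed (as launched above)
  ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4

The 'Write completed with error (sct=0, sc=8)' and 'CQ transport error -6' lines that follow are that perf initiator reacting to its queue pairs going away as the target is killed, which is the path nvmf_shutdown_tc4 is meant to exercise.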
00:28:55.239 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:55.239 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2406091 00:28:55.239 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2406091 ']' 00:28:55.239 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2406091 00:28:55.239 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:28:55.239 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:55.239 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2406091 00:28:55.239 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:55.239 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:55.239 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2406091' 00:28:55.239 killing process with pid 2406091 00:28:55.239 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 2406091 00:28:55.239 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 2406091 00:28:55.239 [2024-10-28 05:04:45.240279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23478a0 is same with the state(6) to be set 00:28:55.239 [2024-10-28 05:04:45.240363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23478a0 is same with the state(6) to be set 00:28:55.239 [2024-10-28 05:04:45.240390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23478a0 is same with the state(6) to be set 00:28:55.239 [2024-10-28 05:04:45.240405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23478a0 is same with the state(6) to be set 00:28:55.239 [2024-10-28 05:04:45.240417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23478a0 is same with the state(6) to be set 00:28:55.239 [2024-10-28 05:04:45.240429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23478a0 is same with the state(6) to be set 00:28:55.239 [2024-10-28 05:04:45.241384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347d70 is same with the state(6) to be set 00:28:55.239 [2024-10-28 05:04:45.241456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347d70 is same with the state(6) to be set 00:28:55.239 [2024-10-28 05:04:45.241491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347d70 is same with the state(6) to be set 00:28:55.239 [2024-10-28 05:04:45.241505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347d70 is same with the state(6) to be set 00:28:55.239 [2024-10-28 05:04:45.241540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2347d70 is same with the state(6) to be set 00:28:55.239 [2024-10-28 05:04:45.241558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347d70 is same with the state(6) to be set 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 [2024-10-28 05:04:45.248881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting 
I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.239 Write completed with error (sct=0, sc=8) 00:28:55.239 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 [2024-10-28 05:04:45.250068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 
starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 [2024-10-28 05:04:45.251180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 
00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 [2024-10-28 05:04:45.251559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d67c0 is same with Write completed with error (sct=0, sc=8) 00:28:55.240 the state(6) to be set 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 [2024-10-28 05:04:45.251598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d67c0 is same with the state(6) to be set 00:28:55.240 starting I/O failed: -6 00:28:55.240 [2024-10-28 05:04:45.251614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d67c0 is same with the state(6) to be set 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 [2024-10-28 05:04:45.251629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d67c0 is same with the state(6) to be set 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 [2024-10-28 05:04:45.251661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d67c0 is same with starting I/O failed: -6 00:28:55.240 the state(6) to be set 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 
00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 [2024-10-28 05:04:45.252204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6c90 is same with the state(6) to be set 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.240 Write completed with error (sct=0, sc=8) 00:28:55.240 starting I/O failed: -6 00:28:55.241 [2024-10-28 05:04:45.252237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6c90 is same with the state(6) to be set 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 [2024-10-28 05:04:45.252263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6c90 is same with the state(6) to be set 00:28:55.241 starting I/O failed: -6 00:28:55.241 [2024-10-28 05:04:45.252276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6c90 is same with Write completed with error (sct=0, sc=8) 00:28:55.241 the state(6) to be set 00:28:55.241 starting I/O failed: -6 00:28:55.241 [2024-10-28 05:04:45.252295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6c90 is same with the state(6) to be set 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 [2024-10-28 05:04:45.252308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6c90 is same with starting I/O failed: -6 00:28:55.241 the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.252322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6c90 is same with Write completed with error (sct=0, sc=8) 00:28:55.241 the state(6) to be set 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 [2024-10-28 05:04:45.252645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d7160 is 
same with starting I/O failed: -6 00:28:55.241 the state(6) to be set 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 [2024-10-28 05:04:45.252680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d7160 is same with the state(6) to be set 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 [2024-10-28 05:04:45.252696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d7160 is same with the state(6) to be set 00:28:55.241 starting I/O failed: -6 00:28:55.241 [2024-10-28 05:04:45.252709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d7160 is same with the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.252721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d7160 is same with the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.252733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d7160 is same with the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.252745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d7160 is same with the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.252757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d7160 is same with the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.252858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:55.241 NVMe io qpair process completion error 00:28:55.241 [2024-10-28 05:04:45.253149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d62f0 is same with the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.253182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d62f0 is same with the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.253198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d62f0 is same with the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.253211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d62f0 is same with the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.253223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d62f0 is same with the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.253235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d62f0 is same with the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.253251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d62f0 is same with the state(6) to be set 00:28:55.241 [2024-10-28 05:04:45.253274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d62f0 is same with the state(6) to be set 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write 
completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 [2024-10-28 05:04:45.257574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 
00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 [2024-10-28 05:04:45.258627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.241 starting I/O failed: -6 00:28:55.241 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, 
sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 [2024-10-28 05:04:45.259775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:55.242 NVMe io qpair process completion error 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed 
with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 [2024-10-28 05:04:45.261004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, 
sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 [2024-10-28 05:04:45.262010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d8e80 is same with the state(6) to be set 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 [2024-10-28 05:04:45.262045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d8e80 is same with the state(6) to be set 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 [2024-10-28 05:04:45.262064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d8e80 is same with the state(6) to be set 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 [2024-10-28 05:04:45.262076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d8e80 is same with the state(6) to be set 00:28:55.242 starting I/O failed: -6 00:28:55.242 [2024-10-28 05:04:45.262089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d8e80 is same with Write completed with error (sct=0, sc=8) 00:28:55.242 the state(6) to be set 00:28:55.242 starting I/O failed: -6 00:28:55.242 [2024-10-28 05:04:45.262135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 starting I/O failed: -6 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.242 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write 
completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 [2024-10-28 05:04:45.262902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9820 is same with Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 the state(6) to be set 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 [2024-10-28 05:04:45.262946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9820 is same with the state(6) to be set 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 [2024-10-28 05:04:45.262961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9820 is same with the state(6) to be set 00:28:55.243 [2024-10-28 05:04:45.262974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9820 is same with the state(6) to be set 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 [2024-10-28 05:04:45.262987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9820 is same with the state(6) to be set 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 [2024-10-28 05:04:45.262999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9820 is same with the state(6) to be set 00:28:55.243 starting I/O failed: -6 00:28:55.243 [2024-10-28 05:04:45.263011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9820 is same with the state(6) to be set 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 [2024-10-28 05:04:45.263023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9820 is same with starting I/O failed: -6 00:28:55.243 the state(6) to be set 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 Write completed with error (sct=0, sc=8) 
00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 [2024-10-28 05:04:45.263266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error 
(sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 [2024-10-28 05:04:45.264895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:55.243 NVMe io qpair process completion error 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.243 starting I/O failed: -6 00:28:55.243 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with 
error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 [2024-10-28 05:04:45.266157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 
00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 [2024-10-28 05:04:45.267324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, 
sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 [2024-10-28 05:04:45.268509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.244 Write completed with error (sct=0, sc=8) 00:28:55.244 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 
starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 
starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 [2024-10-28 05:04:45.270478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:55.245 NVMe io qpair process completion error 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 [2024-10-28 05:04:45.271704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with 
error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 [2024-10-28 05:04:45.272750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 Write completed with error (sct=0, sc=8) 00:28:55.245 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed 
with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 [2024-10-28 05:04:45.273913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such 
device or address) on qpair id 1 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 
00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 [2024-10-28 05:04:45.276265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:55.246 NVMe io qpair process completion error 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 Write completed with 
error (sct=0, sc=8) 00:28:55.246 Write completed with error (sct=0, sc=8) 00:28:55.246 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 [2024-10-28 05:04:45.277698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 
00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 [... the preceding two messages repeat many times ...]
00:28:55.247 [2024-10-28 05:04:45.278800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:55.247 Write completed with error (sct=0, sc=8) 00:28:55.247 starting I/O failed: -6 [... repeated many times ...]
00:28:55.247 [2024-10-28 05:04:45.279937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:55.247 starting I/O failed: -6 00:28:55.247 Write completed with error (sct=0, sc=8) [... repeated many times ...]
00:28:55.248 [2024-10-28 05:04:45.282393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:55.248 NVMe io qpair process completion error
00:28:55.248 Write completed with error (sct=0, sc=8) 00:28:55.248 starting I/O failed: -6 [... repeated many times ...]
00:28:55.248 [2024-10-28 05:04:45.283706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:55.248 Write completed with error (sct=0, sc=8) 00:28:55.248 starting I/O failed: -6 [... repeated many times ...]
00:28:55.248 [2024-10-28 05:04:45.284788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:55.248 Write completed with error (sct=0, sc=8) 00:28:55.248 starting I/O failed: -6 [... repeated many times ...]
00:28:55.249 [2024-10-28 05:04:45.285931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:55.249 starting I/O failed: -6 00:28:55.249 Write completed with error (sct=0, sc=8) [... repeated many times ...]
00:28:55.249 [2024-10-28 05:04:45.288509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:55.249 NVMe io qpair process completion error
00:28:55.249 Write completed with error (sct=0, sc=8) 00:28:55.249 starting I/O failed: -6 [... repeated many times ...]
00:28:55.250 [2024-10-28 05:04:45.289810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:55.250 Write completed with error (sct=0, sc=8) 00:28:55.250 starting I/O failed: -6 [... repeated many times ...]
00:28:55.250 [2024-10-28 05:04:45.290792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:55.250 Write completed with error (sct=0, sc=8) 00:28:55.250 starting I/O failed: -6 [... repeated many times ...]
00:28:55.250 [2024-10-28 05:04:45.291943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:55.250 starting I/O failed: -6 00:28:55.250 Write completed with error (sct=0, sc=8) [... repeated many times ...]
00:28:55.251 [2024-10-28 05:04:45.294047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:55.251 NVMe io qpair process completion error
00:28:55.251 Write completed with error (sct=0, sc=8) 00:28:55.251 starting I/O failed: -6 [... repeated many times ...]
00:28:55.251 [2024-10-28 05:04:45.295409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:55.251 starting I/O failed: -6 00:28:55.251 Write completed with error (sct=0, sc=8) [... repeated many times ...]
00:28:55.251 [2024-10-28 05:04:45.296543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:55.251 Write completed with error (sct=0, sc=8) 00:28:55.251 starting I/O failed: -6 [... repeated many times ...]
00:28:55.252 [2024-10-28 05:04:45.297732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:55.252 Write completed with error (sct=0, sc=8) 00:28:55.252 starting I/O failed: -6 [... repeated many times ...]
00:28:55.252 [2024-10-28 05:04:45.300064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:55.252 NVMe io qpair process completion error
00:28:55.252 Write completed with error (sct=0, sc=8) 00:28:55.252 starting I/O failed: -6 [... repeated many times ...]
00:28:55.252 [2024-10-28 05:04:45.301381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:55.252 Write completed with error (sct=0, sc=8) 00:28:55.253 starting I/O failed: -6 [... repeated many times ...]
00:28:55.253 [2024-10-28 05:04:45.302468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:55.253 Write completed with error (sct=0, sc=8) 00:28:55.253 starting I/O failed: -6 [... repeated many times ...]
00:28:55.253 [2024-10-28 05:04:45.303627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:55.253 Write completed with error (sct=0, sc=8) 00:28:55.253 starting I/O failed: -6 [... repeated many times ...] 00:28:55.253
starting I/O failed: -6 00:28:55.253 Write completed with error (sct=0, sc=8) 00:28:55.253 starting I/O failed: -6 00:28:55.253 Write completed with error (sct=0, sc=8) 00:28:55.253 starting I/O failed: -6 00:28:55.253 Write completed with error (sct=0, sc=8) 00:28:55.253 starting I/O failed: -6 00:28:55.253 Write completed with error (sct=0, sc=8) 00:28:55.253 starting I/O failed: -6 00:28:55.253 Write completed with error (sct=0, sc=8) 00:28:55.253 starting I/O failed: -6 00:28:55.253 Write completed with error (sct=0, sc=8) 00:28:55.253 starting I/O failed: -6 00:28:55.253 Write completed with error (sct=0, sc=8) 00:28:55.253 starting I/O failed: -6 00:28:55.253 Write completed with error (sct=0, sc=8) 00:28:55.253 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 starting I/O failed: -6 00:28:55.254 [2024-10-28 05:04:45.307414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:55.254 NVMe io qpair process completion error 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 
00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Write completed with error (sct=0, sc=8) 00:28:55.254 Initializing NVMe Controllers 00:28:55.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:28:55.254 Controller IO queue size 128, less than required. 00:28:55.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:55.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:28:55.254 Controller IO queue size 128, less than required. 00:28:55.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:55.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:55.254 Controller IO queue size 128, less than required.
00:28:55.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:55.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:55.254 Controller IO queue size 128, less than required.
00:28:55.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:55.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:55.254 Controller IO queue size 128, less than required.
00:28:55.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:55.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:55.254 Controller IO queue size 128, less than required.
00:28:55.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:55.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:55.254 Controller IO queue size 128, less than required.
00:28:55.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:55.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:55.254 Controller IO queue size 128, less than required.
00:28:55.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:55.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:55.254 Controller IO queue size 128, less than required.
00:28:55.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:55.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:28:55.254 Controller IO queue size 128, less than required.
00:28:55.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:55.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:55.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:55.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:55.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:55.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:55.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:55.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:55.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:55.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:55.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:55.254 Initialization complete. Launching workers.
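The "Controller IO queue size 128, less than required" / "Consider using lower queue depth or smaller IO size" advisories above mean the perf run requested more queue entries than the target's 128-entry IO queues provide, so excess requests may be queued at the NVMe driver. A minimal sketch of a rerun with a smaller queue depth and IO size follows; it assumes the standard spdk_nvme_perf options (-r transport ID, -q queue depth, -o IO size in bytes, -w workload, -t run time in seconds) and uses cnode1 purely as an illustrative subsystem, not the arguments target/shutdown.sh actually passes.

# Sketch only: keep the per-qpair queue depth at or below the controller's reported 128 entries.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  -q 64 -o 4096 -w randwrite -t 5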
00:28:55.254 ========================================================
00:28:55.254 Latency(us)
00:28:55.254 Device Information : IOPS MiB/s Average min max
00:28:55.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1764.43 75.82 72569.25 948.00 126174.59
00:28:55.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1824.80 78.41 70205.63 891.75 117918.76
00:28:55.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1816.52 78.05 70565.72 980.10 143694.56
00:28:55.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1869.48 80.33 68596.81 753.29 134617.60
00:28:55.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1753.10 75.33 72352.86 1177.29 117398.53
00:28:55.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1767.48 75.95 72535.80 1142.52 134995.56
00:28:55.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1748.30 75.12 72942.38 910.83 139202.56
00:28:55.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1744.60 74.96 72726.71 1147.83 118298.10
00:28:55.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1746.12 75.03 72689.99 887.77 120832.41
00:28:55.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1753.32 75.34 72422.17 1065.64 116594.46
00:28:55.254 ========================================================
00:28:55.254 Total : 17788.15 764.33 71729.51 753.29 143694.56
00:28:55.254
00:28:55.254 [2024-10-28 05:04:45.315435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599060 is same with the state(6) to be set
00:28:55.254 [2024-10-28 05:04:45.315545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3500 is same with the state(6) to be set
00:28:55.254 [2024-10-28 05:04:45.315604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15987f0 is same with the state(6) to be set
00:28:55.254 [2024-10-28 05:04:45.315704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15968f0 is same with the state(6) to be set
00:28:55.254 [2024-10-28 05:04:45.315769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598420 is same with the state(6) to be set
00:28:55.255 [2024-10-28 05:04:45.315826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598b20 is same with the state(6) to be set
00:28:55.255 [2024-10-28 05:04:45.315931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15948a0 is same with the state(6) to be set
00:28:55.255 [2024-10-28 05:04:45.315989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3320 is same with the state(6) to be set
00:28:55.255 [2024-10-28 05:04:45.316088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1598240 is same with the state(6) to be set
00:28:55.255 [2024-10-28 05:04:45.316186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1594da0 is same with the state(6) to be set
00:28:55.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:55.255 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:28:56.212 05:04:46
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2406276 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2406276 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2406276 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.212 rmmod nvme_tcp 00:28:56.212 rmmod nvme_fabrics 00:28:56.212 rmmod nvme_keyring 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 2406091 ']' 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 2406091 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2406091 ']' 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2406091 00:28:56.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2406091) - No such process 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 2406091 is not found' 00:28:56.212 Process with pid 2406091 is not found 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.212 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.745 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.745 00:28:58.745 real 0m10.512s 00:28:58.745 user 0m26.023s 00:28:58.745 sys 0m5.897s 00:28:58.745 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.745 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:58.745 ************************************ 00:28:58.745 END TEST nvmf_shutdown_tc4 00:28:58.745 ************************************ 00:28:58.745 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:58.745 00:28:58.745 real 0m39.764s 00:28:58.745 user 1m49.276s 00:28:58.745 sys 0m12.241s 00:28:58.745 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.745 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:28:58.745 ************************************ 00:28:58.745 END TEST nvmf_shutdown 00:28:58.745 ************************************ 00:28:58.745 05:04:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:58.745 00:28:58.745 real 18m40.124s 00:28:58.745 user 51m50.305s 00:28:58.745 sys 4m0.490s 00:28:58.745 05:04:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.745 05:04:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:58.745 ************************************ 00:28:58.745 END TEST nvmf_target_extra 00:28:58.745 ************************************ 00:28:58.745 05:04:48 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:58.745 05:04:48 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:58.745 05:04:48 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.745 05:04:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:58.745 ************************************ 00:28:58.745 START TEST nvmf_host 00:28:58.745 ************************************ 00:28:58.745 05:04:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:58.745 * Looking for test storage... 00:28:58.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:58.745 05:04:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:58.745 05:04:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1689 -- # lcov --version 00:28:58.745 05:04:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:58.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.745 --rc genhtml_branch_coverage=1 00:28:58.745 --rc genhtml_function_coverage=1 00:28:58.745 --rc genhtml_legend=1 00:28:58.745 --rc geninfo_all_blocks=1 00:28:58.745 --rc geninfo_unexecuted_blocks=1 00:28:58.745 00:28:58.745 ' 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:58.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.745 --rc genhtml_branch_coverage=1 00:28:58.745 --rc genhtml_function_coverage=1 00:28:58.745 --rc genhtml_legend=1 00:28:58.745 --rc geninfo_all_blocks=1 00:28:58.745 --rc geninfo_unexecuted_blocks=1 00:28:58.745 00:28:58.745 ' 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:58.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.745 --rc genhtml_branch_coverage=1 00:28:58.745 --rc genhtml_function_coverage=1 00:28:58.745 --rc genhtml_legend=1 00:28:58.745 --rc geninfo_all_blocks=1 00:28:58.745 --rc geninfo_unexecuted_blocks=1 00:28:58.745 00:28:58.745 ' 00:28:58.745 05:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:58.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.745 --rc genhtml_branch_coverage=1 00:28:58.745 --rc genhtml_function_coverage=1 00:28:58.746 --rc genhtml_legend=1 00:28:58.746 --rc geninfo_all_blocks=1 00:28:58.746 --rc geninfo_unexecuted_blocks=1 00:28:58.746 00:28:58.746 ' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.746 ************************************ 00:28:58.746 START TEST nvmf_multicontroller 00:28:58.746 ************************************ 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:58.746 * Looking for test storage... 
00:28:58.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1689 -- # lcov --version 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:58.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.746 --rc genhtml_branch_coverage=1 00:28:58.746 --rc genhtml_function_coverage=1 00:28:58.746 --rc genhtml_legend=1 00:28:58.746 --rc geninfo_all_blocks=1 00:28:58.746 --rc geninfo_unexecuted_blocks=1 00:28:58.746 00:28:58.746 ' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:58.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.746 --rc genhtml_branch_coverage=1 00:28:58.746 --rc genhtml_function_coverage=1 00:28:58.746 --rc genhtml_legend=1 00:28:58.746 --rc geninfo_all_blocks=1 00:28:58.746 --rc geninfo_unexecuted_blocks=1 00:28:58.746 00:28:58.746 ' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:58.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.746 --rc genhtml_branch_coverage=1 00:28:58.746 --rc genhtml_function_coverage=1 00:28:58.746 --rc genhtml_legend=1 00:28:58.746 --rc geninfo_all_blocks=1 00:28:58.746 --rc geninfo_unexecuted_blocks=1 00:28:58.746 00:28:58.746 ' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:58.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.746 --rc genhtml_branch_coverage=1 00:28:58.746 --rc genhtml_function_coverage=1 00:28:58.746 --rc genhtml_legend=1 00:28:58.746 --rc geninfo_all_blocks=1 00:28:58.746 --rc geninfo_unexecuted_blocks=1 00:28:58.746 00:28:58.746 ' 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:58.746 05:04:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.746 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:58.747 05:04:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.747 05:04:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:00.651 
05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:00.651 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:00.651 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.651 05:04:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:00.651 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:00.651 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:00.651 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
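The device scan traced above keys off PCI vendor/device IDs and then resolves each matching function to its kernel net interface through sysfs. A minimal standalone sketch of the same idea, assuming an Intel E810 NIC (0x8086:0x159b) as reported in this log and ignoring the other supported device IDs:

# List net interfaces backed by Intel E810 (0x8086:0x159b) PCI functions,
# mirroring what gather_supported_nvmf_pci_devs does in the trace above.
for pci in /sys/bus/pci/devices/*; do
  vendor=$(cat "$pci/vendor")    # e.g. 0x8086
  device=$(cat "$pci/device")    # e.g. 0x159b
  if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
    for net in "$pci"/net/*; do
      [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
    done
  fi
done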
00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.652 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:00.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:29:00.911 00:29:00.911 --- 10.0.0.2 ping statistics --- 00:29:00.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.911 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:00.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:29:00.911 00:29:00.911 --- 10.0.0.1 ping statistics --- 00:29:00.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.911 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=2409034 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 2409034 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2409034 ']' 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:00.911 05:04:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:00.911 [2024-10-28 05:04:51.426401] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:29:00.911 [2024-10-28 05:04:51.426489] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.170 [2024-10-28 05:04:51.564708] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:01.170 [2024-10-28 05:04:51.599604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:01.170 [2024-10-28 05:04:51.645431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.170 [2024-10-28 05:04:51.645514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.170 [2024-10-28 05:04:51.645528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.170 [2024-10-28 05:04:51.645539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.170 [2024-10-28 05:04:51.645548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:01.170 [2024-10-28 05:04:51.647026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:01.170 [2024-10-28 05:04:51.647090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:01.170 [2024-10-28 05:04:51.647093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.105 [2024-10-28 05:04:52.457218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.105 Malloc0 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
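The nvmf_tcp_init steps traced above boil down to a two-port, back-to-back topology: one interface is moved into a network namespace and acts as the target at 10.0.0.2, the other stays in the host namespace as the initiator at 10.0.0.1. A condensed sketch using the interface names and addresses reported in this log (they will differ on other hosts):

# Target side lives in its own namespace, initiator stays on the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1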
00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.105 [2024-10-28 05:04:52.519895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.105 [2024-10-28 05:04:52.527734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:02.105 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.106 Malloc1 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.106 
05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2409190 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2409190 /var/tmp/bdevperf.sock 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2409190 ']' 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:02.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
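Up to this point the target has been provisioned entirely through rpc_cmd, which the harness is assumed to forward to SPDK's scripts/rpc.py against /var/tmp/spdk.sock. Replayed as plain RPC calls, with the arguments copied from the trace (the cnode2/Malloc1 subsystem follows the same pattern on the same two ports):

RPC='./scripts/rpc.py'
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421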
00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.106 05:04:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.040 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:03.040 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:03.040 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:03.040 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.040 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.299 NVMe0n1 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.299 1 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.299 request: 00:29:03.299 { 00:29:03.299 "name": "NVMe0", 00:29:03.299 "trtype": "tcp", 00:29:03.299 "traddr": "10.0.0.2", 00:29:03.299 "adrfam": "ipv4", 00:29:03.299 "trsvcid": "4420", 00:29:03.299 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:03.299 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:03.299 "hostaddr": "10.0.0.1", 00:29:03.299 "prchk_reftag": false, 00:29:03.299 "prchk_guard": false, 00:29:03.299 "hdgst": false, 00:29:03.299 "ddgst": false, 00:29:03.299 "allow_unrecognized_csi": false, 00:29:03.299 "method": "bdev_nvme_attach_controller", 00:29:03.299 "req_id": 1 00:29:03.299 } 00:29:03.299 Got JSON-RPC error response 00:29:03.299 response: 00:29:03.299 { 00:29:03.299 "code": -114, 00:29:03.299 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:03.299 } 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:03.299 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.300 request: 00:29:03.300 { 00:29:03.300 "name": "NVMe0", 00:29:03.300 "trtype": "tcp", 00:29:03.300 "traddr": "10.0.0.2", 00:29:03.300 "adrfam": "ipv4", 00:29:03.300 "trsvcid": "4420", 00:29:03.300 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:03.300 "hostaddr": "10.0.0.1", 00:29:03.300 "prchk_reftag": false, 00:29:03.300 "prchk_guard": false, 00:29:03.300 "hdgst": false, 00:29:03.300 "ddgst": false, 00:29:03.300 "allow_unrecognized_csi": false, 00:29:03.300 "method": "bdev_nvme_attach_controller", 00:29:03.300 "req_id": 1 00:29:03.300 } 00:29:03.300 Got JSON-RPC error response 00:29:03.300 response: 00:29:03.300 { 00:29:03.300 "code": -114, 00:29:03.300 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:03.300 } 00:29:03.300 05:04:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.300 request: 00:29:03.300 { 00:29:03.300 "name": "NVMe0", 00:29:03.300 "trtype": "tcp", 00:29:03.300 "traddr": "10.0.0.2", 00:29:03.300 "adrfam": "ipv4", 00:29:03.300 "trsvcid": "4420", 00:29:03.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.300 "hostaddr": "10.0.0.1", 00:29:03.300 "prchk_reftag": false, 00:29:03.300 "prchk_guard": false, 00:29:03.300 "hdgst": false, 00:29:03.300 "ddgst": false, 00:29:03.300 "multipath": "disable", 00:29:03.300 "allow_unrecognized_csi": false, 00:29:03.300 "method": "bdev_nvme_attach_controller", 00:29:03.300 "req_id": 1 00:29:03.300 } 00:29:03.300 Got JSON-RPC error response 00:29:03.300 response: 00:29:03.300 { 00:29:03.300 "code": -114, 00:29:03.300 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:03.300 } 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:03.300 05:04:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.300 request: 00:29:03.300 { 00:29:03.300 "name": "NVMe0", 00:29:03.300 "trtype": "tcp", 00:29:03.300 "traddr": "10.0.0.2", 00:29:03.300 "adrfam": "ipv4", 00:29:03.300 "trsvcid": "4420", 00:29:03.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.300 "hostaddr": "10.0.0.1", 00:29:03.300 "prchk_reftag": false, 00:29:03.300 "prchk_guard": false, 00:29:03.300 "hdgst": false, 00:29:03.300 "ddgst": false, 00:29:03.300 "multipath": "failover", 00:29:03.300 "allow_unrecognized_csi": false, 00:29:03.300 "method": "bdev_nvme_attach_controller", 00:29:03.300 "req_id": 1 00:29:03.300 } 00:29:03.300 Got JSON-RPC error response 00:29:03.300 response: 00:29:03.300 { 00:29:03.300 "code": -114, 00:29:03.300 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:03.300 } 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.300 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.559 NVMe0n1 00:29:03.559 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
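The sequence above exercises bdevperf's controller-attach RPCs: the first attach of NVMe0 succeeds and exposes NVMe0n1, every conflicting re-attach (different subsystem, different hostnqn, -x disable, -x failover against the same network path) is rejected with JSON-RPC error -114, and only the same subsystem on the second listener port is accepted as an additional path. A condensed sketch against the bdevperf socket, arguments copied from the trace:

RPC='./scripts/rpc.py -s /var/tmp/bdevperf.sock'
# First path: succeeds and creates NVMe0n1.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
# Conflicting definition under the same controller name: expected to fail with -114.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || echo 'expected failure'
# Same subsystem on the second listener port: accepted as a second path.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1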
00:29:03.559 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:03.559 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.559 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.559 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.559 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:03.559 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.559 05:04:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.817 00:29:03.817 05:04:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.817 05:04:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:03.817 05:04:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.817 05:04:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:03.817 05:04:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.817 05:04:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.817 05:04:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:03.817 05:04:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:04.752 { 00:29:04.752 "results": [ 00:29:04.752 { 00:29:04.752 "job": "NVMe0n1", 00:29:04.752 "core_mask": "0x1", 00:29:04.752 "workload": "write", 00:29:04.752 "status": "finished", 00:29:04.752 "queue_depth": 128, 00:29:04.752 "io_size": 4096, 00:29:04.752 "runtime": 1.009325, 00:29:04.752 "iops": 16885.542317885716, 00:29:04.752 "mibps": 65.95914967924108, 00:29:04.752 "io_failed": 0, 00:29:04.752 "io_timeout": 0, 00:29:04.752 "avg_latency_us": 7546.921310265115, 00:29:04.752 "min_latency_us": 4768.908854650083, 00:29:04.752 "max_latency_us": 12554.882494895119 00:29:04.752 } 00:29:04.752 ], 00:29:04.752 "core_count": 1 00:29:04.752 } 00:29:04.752 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:04.752 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.752 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:04.752 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.752 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:04.752 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2409190 00:29:04.752 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 2409190 ']' 00:29:04.752 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2409190 00:29:04.752 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2409190 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2409190' 00:29:05.012 killing process with pid 2409190 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2409190 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2409190 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1595 -- # read -r file 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1594 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1594 -- # sort -u 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # cat 00:29:05.012 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:05.012 [2024-10-28 05:04:52.636883] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:29:05.012 [2024-10-28 05:04:52.637003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409190 ] 00:29:05.012 [2024-10-28 05:04:52.771032] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:05.012 [2024-10-28 05:04:52.808500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.012 [2024-10-28 05:04:52.854581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.012 [2024-10-28 05:04:54.160815] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 37e6fef5-49f2-4ab8-81a9-21c2bd1e3e55 already exists 00:29:05.012 [2024-10-28 05:04:54.160860] bdev.c:7836:bdev_register: *ERROR*: Unable to add uuid:37e6fef5-49f2-4ab8-81a9-21c2bd1e3e55 alias for bdev NVMe1n1 00:29:05.012 [2024-10-28 05:04:54.160877] bdev_nvme.c:4604:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:05.012 Running I/O for 1 seconds... 00:29:05.012 16883.00 IOPS, 65.95 MiB/s 00:29:05.012 Latency(us) 00:29:05.012 [2024-10-28T04:04:55.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.012 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:05.012 NVMe0n1 : 1.01 16885.54 65.96 0.00 0.00 7546.92 4768.91 12554.88 00:29:05.012 [2024-10-28T04:04:55.608Z] =================================================================================================================== 00:29:05.012 [2024-10-28T04:04:55.608Z] Total : 16885.54 65.96 0.00 0.00 7546.92 4768.91 12554.88 00:29:05.012 Received shutdown signal, test time was about 1.000000 seconds 00:29:05.012 00:29:05.012 Latency(us) 00:29:05.012 [2024-10-28T04:04:55.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.012 [2024-10-28T04:04:55.608Z] =================================================================================================================== 00:29:05.012 [2024-10-28T04:04:55.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:05.012 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1601 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1595 -- # read -r file 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.012 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.012 rmmod nvme_tcp 00:29:05.271 rmmod nvme_fabrics 00:29:05.271 rmmod nvme_keyring 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.271 
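As a quick sanity check on the bdevperf summary captured above, the throughput figure follows directly from IOPS times I/O size: 16885.54 IOPS of 4096-byte writes is roughly 69.16 MB/s, i.e. 65.96 MiB/s, matching the reported value.

awk 'BEGIN { printf "%.2f MiB/s\n", 16885.542317885716 * 4096 / 1048576 }'   # -> 65.96 MiB/s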
05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 2409034 ']' 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 2409034 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2409034 ']' 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2409034 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2409034 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2409034' 00:29:05.271 killing process with pid 2409034 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2409034 00:29:05.271 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2409034 00:29:05.531 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:05.531 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:05.531 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:05.531 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:05.531 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:29:05.531 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:05.531 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:29:05.531 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.531 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:05.531 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.531 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.531 05:04:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.434 05:04:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.434 00:29:07.434 real 0m8.886s 00:29:07.434 user 0m17.050s 00:29:07.434 sys 0m2.326s 00:29:07.434 05:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:07.434 05:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.434 ************************************ 00:29:07.434 END TEST nvmf_multicontroller 00:29:07.434 
************************************ 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.694 ************************************ 00:29:07.694 START TEST nvmf_aer 00:29:07.694 ************************************ 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:07.694 * Looking for test storage... 00:29:07.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1689 -- # lcov --version 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:07.694 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:29:07.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.695 --rc genhtml_branch_coverage=1 00:29:07.695 --rc genhtml_function_coverage=1 00:29:07.695 --rc genhtml_legend=1 00:29:07.695 --rc geninfo_all_blocks=1 00:29:07.695 --rc geninfo_unexecuted_blocks=1 00:29:07.695 00:29:07.695 ' 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:29:07.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.695 --rc genhtml_branch_coverage=1 00:29:07.695 --rc genhtml_function_coverage=1 00:29:07.695 --rc genhtml_legend=1 00:29:07.695 --rc geninfo_all_blocks=1 00:29:07.695 --rc geninfo_unexecuted_blocks=1 00:29:07.695 00:29:07.695 ' 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:29:07.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.695 --rc genhtml_branch_coverage=1 00:29:07.695 --rc genhtml_function_coverage=1 00:29:07.695 --rc genhtml_legend=1 00:29:07.695 --rc geninfo_all_blocks=1 00:29:07.695 --rc geninfo_unexecuted_blocks=1 00:29:07.695 00:29:07.695 ' 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:29:07.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.695 --rc genhtml_branch_coverage=1 00:29:07.695 --rc genhtml_function_coverage=1 00:29:07.695 --rc genhtml_legend=1 00:29:07.695 --rc geninfo_all_blocks=1 00:29:07.695 --rc geninfo_unexecuted_blocks=1 00:29:07.695 00:29:07.695 ' 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:07.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.695 05:04:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.598 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:09.599 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:09.599 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:09.599 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:09.599 05:05:00 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:09.599 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.599 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.858 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.858 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.858 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:09.859 
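For readers following the trace: the nvmf_tcp_init plumbing above reduces to a short iproute2/iptables sequence. The sketch below is a condensed recap of the commands traced in this run, not the literal common.sh source; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this machine.

# move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the default NVMe/TCP port on the initiator-facing interface
# (the harness tags the rule with an SPDK_NVMF comment so it can strip it again at teardown)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

The two pings that follow (10.0.0.2 from the host side, 10.0.0.1 from inside cvl_0_0_ns_spdk) simply confirm the namespaces can reach each other before the target application is started.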
05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:09.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:29:09.859 00:29:09.859 --- 10.0.0.2 ping statistics --- 00:29:09.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.859 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:09.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:29:09.859 00:29:09.859 --- 10.0.0.1 ping statistics --- 00:29:09.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.859 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=2411508 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 2411508 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2411508 ']' 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:09.859 05:05:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.859 [2024-10-28 05:05:00.386414] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:29:09.859 [2024-10-28 05:05:00.386520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.117 [2024-10-28 05:05:00.528263] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:10.117 [2024-10-28 05:05:00.571434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.117 [2024-10-28 05:05:00.624808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.117 [2024-10-28 05:05:00.624863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.117 [2024-10-28 05:05:00.624887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.117 [2024-10-28 05:05:00.624901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.117 [2024-10-28 05:05:00.624913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.117 [2024-10-28 05:05:00.626662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.117 [2024-10-28 05:05:00.626729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.117 [2024-10-28 05:05:00.626778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:10.117 [2024-10-28 05:05:00.626781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.049 [2024-10-28 05:05:01.415951] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.049 Malloc0 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 
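Once the reactors are running, host/aer.sh assembles the subsystem under test over the target's RPC socket. The rpc_cmd invocations traced here and in the entries that follow reduce to roughly the sketch below (a recap of this run, not the literal script; the NQN, serial number and Malloc bdev names are the ones this job happened to use):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192                # TCP transport; -u 8192 sets the in-capsule data size
rpc_cmd bdev_malloc_create 64 512 --name Malloc0               # 64 MB malloc bdev with 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_get_subsystems                                    # dumps the discovery subsystem plus cnode1, as printed below

Adding the second namespace later in the trace (Malloc1, nsid 2) is what fires the namespace-attribute-changed AER that the aer binary is waiting for.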
00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.049 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.050 [2024-10-28 05:05:01.478678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.050 [ 00:29:11.050 { 00:29:11.050 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:11.050 "subtype": "Discovery", 00:29:11.050 "listen_addresses": [], 00:29:11.050 "allow_any_host": true, 00:29:11.050 "hosts": [] 00:29:11.050 }, 00:29:11.050 { 00:29:11.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.050 "subtype": "NVMe", 00:29:11.050 "listen_addresses": [ 00:29:11.050 { 00:29:11.050 "trtype": "TCP", 00:29:11.050 "adrfam": "IPv4", 00:29:11.050 "traddr": "10.0.0.2", 00:29:11.050 "trsvcid": "4420" 00:29:11.050 } 00:29:11.050 ], 00:29:11.050 "allow_any_host": true, 00:29:11.050 "hosts": [], 00:29:11.050 "serial_number": "SPDK00000000000001", 00:29:11.050 "model_number": "SPDK bdev Controller", 00:29:11.050 "max_namespaces": 2, 00:29:11.050 "min_cntlid": 1, 00:29:11.050 "max_cntlid": 65519, 00:29:11.050 "namespaces": [ 00:29:11.050 { 00:29:11.050 "nsid": 1, 00:29:11.050 "bdev_name": "Malloc0", 00:29:11.050 "name": "Malloc0", 00:29:11.050 "nguid": "8466155BDA8F4B9E92443B530CADBC0F", 00:29:11.050 "uuid": "8466155b-da8f-4b9e-9244-3b530cadbc0f" 00:29:11.050 } 00:29:11.050 ] 00:29:11.050 } 00:29:11.050 ] 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2411662 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:11.050 05:05:01 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:11.050 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:11.307 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:11.307 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:29:11.307 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:29:11.307 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:11.307 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:11.307 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:29:11.307 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:29:11.307 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.565 Malloc1 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.565 Asynchronous Event Request test 00:29:11.565 Attaching to 10.0.0.2 00:29:11.565 Attached to 10.0.0.2 00:29:11.565 Registering asynchronous event callbacks... 00:29:11.565 Starting namespace attribute notice tests for all controllers... 
00:29:11.565 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:11.565 aer_cb - Changed Namespace 00:29:11.565 Cleaning up... 00:29:11.565 [ 00:29:11.565 { 00:29:11.565 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:11.565 "subtype": "Discovery", 00:29:11.565 "listen_addresses": [], 00:29:11.565 "allow_any_host": true, 00:29:11.565 "hosts": [] 00:29:11.565 }, 00:29:11.565 { 00:29:11.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.565 "subtype": "NVMe", 00:29:11.565 "listen_addresses": [ 00:29:11.565 { 00:29:11.565 "trtype": "TCP", 00:29:11.565 "adrfam": "IPv4", 00:29:11.565 "traddr": "10.0.0.2", 00:29:11.565 "trsvcid": "4420" 00:29:11.565 } 00:29:11.565 ], 00:29:11.565 "allow_any_host": true, 00:29:11.565 "hosts": [], 00:29:11.565 "serial_number": "SPDK00000000000001", 00:29:11.565 "model_number": "SPDK bdev Controller", 00:29:11.565 "max_namespaces": 2, 00:29:11.565 "min_cntlid": 1, 00:29:11.565 "max_cntlid": 65519, 00:29:11.565 "namespaces": [ 00:29:11.565 { 00:29:11.565 "nsid": 1, 00:29:11.565 "bdev_name": "Malloc0", 00:29:11.565 "name": "Malloc0", 00:29:11.565 "nguid": "8466155BDA8F4B9E92443B530CADBC0F", 00:29:11.565 "uuid": "8466155b-da8f-4b9e-9244-3b530cadbc0f" 00:29:11.565 }, 00:29:11.565 { 00:29:11.565 "nsid": 2, 00:29:11.565 "bdev_name": "Malloc1", 00:29:11.565 "name": "Malloc1", 00:29:11.565 "nguid": "A1C8309DFA7647A4869410ECD33EB190", 00:29:11.565 "uuid": "a1c8309d-fa76-47a4-8694-10ecd33eb190" 00:29:11.565 } 00:29:11.565 ] 00:29:11.565 } 00:29:11.565 ] 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2411662 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.565 05:05:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:11.565 rmmod nvme_tcp 00:29:11.565 rmmod nvme_fabrics 00:29:11.565 rmmod nvme_keyring 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 2411508 ']' 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 2411508 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2411508 ']' 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2411508 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2411508 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2411508' 00:29:11.565 killing process with pid 2411508 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2411508 00:29:11.565 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2411508 00:29:11.825 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:11.825 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:11.825 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:11.825 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:11.825 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:29:11.825 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:11.825 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:29:11.825 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.825 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:11.825 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.825 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.825 05:05:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.471 00:29:14.471 real 0m6.355s 00:29:14.471 user 0m7.903s 00:29:14.471 sys 0m1.963s 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.471 ************************************ 00:29:14.471 END TEST nvmf_aer 00:29:14.471 
************************************ 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.471 ************************************ 00:29:14.471 START TEST nvmf_async_init 00:29:14.471 ************************************ 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:14.471 * Looking for test storage... 00:29:14.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1689 -- # lcov --version 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:29:14.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.471 --rc genhtml_branch_coverage=1 00:29:14.471 --rc genhtml_function_coverage=1 00:29:14.471 --rc genhtml_legend=1 00:29:14.471 --rc geninfo_all_blocks=1 00:29:14.471 --rc geninfo_unexecuted_blocks=1 00:29:14.471 00:29:14.471 ' 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:29:14.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.471 --rc genhtml_branch_coverage=1 00:29:14.471 --rc genhtml_function_coverage=1 00:29:14.471 --rc genhtml_legend=1 00:29:14.471 --rc geninfo_all_blocks=1 00:29:14.471 --rc geninfo_unexecuted_blocks=1 00:29:14.471 00:29:14.471 ' 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:29:14.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.471 --rc genhtml_branch_coverage=1 00:29:14.471 --rc genhtml_function_coverage=1 00:29:14.471 --rc genhtml_legend=1 00:29:14.471 --rc geninfo_all_blocks=1 00:29:14.471 --rc geninfo_unexecuted_blocks=1 00:29:14.471 00:29:14.471 ' 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:29:14.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.471 --rc genhtml_branch_coverage=1 00:29:14.471 --rc genhtml_function_coverage=1 00:29:14.471 --rc genhtml_legend=1 00:29:14.471 --rc geninfo_all_blocks=1 00:29:14.471 --rc geninfo_unexecuted_blocks=1 00:29:14.471 00:29:14.471 ' 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.471 05:05:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.471 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:14.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:14.472 05:05:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a24b66f3ce734d209b1be7bcb4c108aa 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.472 05:05:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.373 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:16.374 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:16.374 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:16.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:16.374 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.374 05:05:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:16.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:16.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:29:16.374 00:29:16.374 --- 10.0.0.2 ping statistics --- 00:29:16.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.374 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:16.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:16.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:29:16.374 00:29:16.374 --- 10.0.0.1 ping statistics --- 00:29:16.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.374 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=2413624 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 2413624 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2413624 ']' 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:16.374 05:05:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.374 [2024-10-28 05:05:06.820381] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:29:16.374 [2024-10-28 05:05:06.820459] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.375 [2024-10-28 05:05:06.961003] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
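The nvmf_tcp_init sequence traced above amounts to splitting one dual-port NIC between target and initiator: the first port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule opens TCP/4420, and a ping in each direction confirms the link before the target starts. A condensed sketch of that setup, using the interface names and addresses from this run (substitute your own ports; the harness records the full rule text in the iptables comment, shortened here to just the SPDK_NVMF tag):

    ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

The target itself is then launched inside the namespace, as the nvmfappstart trace above shows (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1).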
00:29:16.633 [2024-10-28 05:05:06.998663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.633 [2024-10-28 05:05:07.046500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.633 [2024-10-28 05:05:07.046557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.633 [2024-10-28 05:05:07.046586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.633 [2024-10-28 05:05:07.046598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.633 [2024-10-28 05:05:07.046608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:16.633 [2024-10-28 05:05:07.047238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.567 [2024-10-28 05:05:07.856416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.567 null0 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.567 05:05:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a24b66f3ce734d209b1be7bcb4c108aa 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.567 [2024-10-28 05:05:07.896559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.567 05:05:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.567 nvme0n1 00:29:17.567 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.567 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:17.567 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.567 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.567 [ 00:29:17.567 { 00:29:17.567 "name": "nvme0n1", 00:29:17.567 "aliases": [ 00:29:17.567 "a24b66f3-ce73-4d20-9b1b-e7bcb4c108aa" 00:29:17.567 ], 00:29:17.567 "product_name": "NVMe disk", 00:29:17.567 "block_size": 512, 00:29:17.567 "num_blocks": 2097152, 00:29:17.567 "uuid": "a24b66f3-ce73-4d20-9b1b-e7bcb4c108aa", 00:29:17.567 "numa_id": 0, 00:29:17.567 "assigned_rate_limits": { 00:29:17.567 "rw_ios_per_sec": 0, 00:29:17.567 "rw_mbytes_per_sec": 0, 00:29:17.567 "r_mbytes_per_sec": 0, 00:29:17.567 "w_mbytes_per_sec": 0 00:29:17.567 }, 00:29:17.567 "claimed": false, 00:29:17.567 "zoned": false, 00:29:17.567 "supported_io_types": { 00:29:17.567 "read": true, 00:29:17.567 "write": true, 00:29:17.567 "unmap": false, 00:29:17.567 "flush": true, 00:29:17.567 "reset": true, 00:29:17.567 "nvme_admin": true, 00:29:17.567 "nvme_io": true, 00:29:17.567 "nvme_io_md": false, 00:29:17.567 "write_zeroes": true, 00:29:17.567 "zcopy": false, 00:29:17.567 "get_zone_info": false, 00:29:17.567 "zone_management": false, 00:29:17.567 "zone_append": false, 00:29:17.567 "compare": true, 00:29:17.567 "compare_and_write": true, 00:29:17.567 "abort": true, 00:29:17.567 "seek_hole": false, 00:29:17.567 "seek_data": false, 00:29:17.567 "copy": true, 00:29:17.567 "nvme_iov_md": false 00:29:17.567 }, 00:29:17.567 "memory_domains": [ 00:29:17.567 { 00:29:17.567 "dma_device_id": "system", 00:29:17.568 "dma_device_type": 1 00:29:17.568 } 00:29:17.568 ], 00:29:17.568 "driver_specific": { 00:29:17.568 "nvme": [ 00:29:17.568 { 00:29:17.568 "trid": { 00:29:17.568 
"trtype": "TCP", 00:29:17.568 "adrfam": "IPv4", 00:29:17.568 "traddr": "10.0.0.2", 00:29:17.568 "trsvcid": "4420", 00:29:17.568 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:17.568 }, 00:29:17.568 "ctrlr_data": { 00:29:17.568 "cntlid": 1, 00:29:17.568 "vendor_id": "0x8086", 00:29:17.568 "model_number": "SPDK bdev Controller", 00:29:17.568 "serial_number": "00000000000000000000", 00:29:17.568 "firmware_revision": "25.01", 00:29:17.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.568 "oacs": { 00:29:17.568 "security": 0, 00:29:17.568 "format": 0, 00:29:17.568 "firmware": 0, 00:29:17.568 "ns_manage": 0 00:29:17.568 }, 00:29:17.568 "multi_ctrlr": true, 00:29:17.568 "ana_reporting": false 00:29:17.568 }, 00:29:17.568 "vs": { 00:29:17.568 "nvme_version": "1.3" 00:29:17.568 }, 00:29:17.568 "ns_data": { 00:29:17.568 "id": 1, 00:29:17.568 "can_share": true 00:29:17.568 } 00:29:17.568 } 00:29:17.568 ], 00:29:17.568 "mp_policy": "active_passive" 00:29:17.568 } 00:29:17.568 } 00:29:17.568 ] 00:29:17.568 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.568 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:17.568 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.568 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.568 [2024-10-28 05:05:08.145509] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:17.568 [2024-10-28 05:05:08.145605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2214de0 (9): Bad file descriptor 00:29:17.826 [2024-10-28 05:05:08.277796] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:17.826 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.826 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:17.826 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.826 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.826 [ 00:29:17.826 { 00:29:17.826 "name": "nvme0n1", 00:29:17.826 "aliases": [ 00:29:17.826 "a24b66f3-ce73-4d20-9b1b-e7bcb4c108aa" 00:29:17.826 ], 00:29:17.827 "product_name": "NVMe disk", 00:29:17.827 "block_size": 512, 00:29:17.827 "num_blocks": 2097152, 00:29:17.827 "uuid": "a24b66f3-ce73-4d20-9b1b-e7bcb4c108aa", 00:29:17.827 "numa_id": 0, 00:29:17.827 "assigned_rate_limits": { 00:29:17.827 "rw_ios_per_sec": 0, 00:29:17.827 "rw_mbytes_per_sec": 0, 00:29:17.827 "r_mbytes_per_sec": 0, 00:29:17.827 "w_mbytes_per_sec": 0 00:29:17.827 }, 00:29:17.827 "claimed": false, 00:29:17.827 "zoned": false, 00:29:17.827 "supported_io_types": { 00:29:17.827 "read": true, 00:29:17.827 "write": true, 00:29:17.827 "unmap": false, 00:29:17.827 "flush": true, 00:29:17.827 "reset": true, 00:29:17.827 "nvme_admin": true, 00:29:17.827 "nvme_io": true, 00:29:17.827 "nvme_io_md": false, 00:29:17.827 "write_zeroes": true, 00:29:17.827 "zcopy": false, 00:29:17.827 "get_zone_info": false, 00:29:17.827 "zone_management": false, 00:29:17.827 "zone_append": false, 00:29:17.827 "compare": true, 00:29:17.827 "compare_and_write": true, 00:29:17.827 "abort": true, 00:29:17.827 "seek_hole": false, 00:29:17.827 "seek_data": false, 00:29:17.827 "copy": true, 00:29:17.827 "nvme_iov_md": false 00:29:17.827 }, 00:29:17.827 "memory_domains": [ 00:29:17.827 { 00:29:17.827 "dma_device_id": "system", 00:29:17.827 "dma_device_type": 1 00:29:17.827 } 00:29:17.827 ], 00:29:17.827 "driver_specific": { 00:29:17.827 "nvme": [ 00:29:17.827 { 00:29:17.827 "trid": { 00:29:17.827 "trtype": "TCP", 00:29:17.827 "adrfam": "IPv4", 00:29:17.827 "traddr": "10.0.0.2", 00:29:17.827 "trsvcid": "4420", 00:29:17.827 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:17.827 }, 00:29:17.827 "ctrlr_data": { 00:29:17.827 "cntlid": 2, 00:29:17.827 "vendor_id": "0x8086", 00:29:17.827 "model_number": "SPDK bdev Controller", 00:29:17.827 "serial_number": "00000000000000000000", 00:29:17.827 "firmware_revision": "25.01", 00:29:17.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.827 "oacs": { 00:29:17.827 "security": 0, 00:29:17.827 "format": 0, 00:29:17.827 "firmware": 0, 00:29:17.827 "ns_manage": 0 00:29:17.827 }, 00:29:17.827 "multi_ctrlr": true, 00:29:17.827 "ana_reporting": false 00:29:17.827 }, 00:29:17.827 "vs": { 00:29:17.827 "nvme_version": "1.3" 00:29:17.827 }, 00:29:17.827 "ns_data": { 00:29:17.827 "id": 1, 00:29:17.827 "can_share": true 00:29:17.827 } 00:29:17.827 } 00:29:17.827 ], 00:29:17.827 "mp_policy": "active_passive" 00:29:17.827 } 00:29:17.827 } 00:29:17.827 ] 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
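Stripped of the xtrace noise, the async_init flow above is a short RPC script: create the TCP transport, back a null bdev, export it through a subsystem and listener, attach to it as a host (the same SPDK app doubles as the NVMe-oF host here), and reset the controller, dumping bdev_get_bdevs before and after. rpc_cmd in the log is the autotest wrapper around SPDK's scripts/rpc.py; roughly the same flow run by hand against the target's RPC socket would look like this, with the transport options, UUID and NQN copied from this run:

    rpc.py nvmf_create_transport -t tcp -o                 # transport opts exactly as in the trace above
    rpc.py bdev_null_create null0 1024 512                 # 1 GiB null bdev, 512-byte blocks (2097152 blocks)
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a24b66f3ce734d209b1be7bcb4c108aa
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: attach over the namespace link, which produces bdev nvme0n1
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_get_bdevs -b nvme0n1                       # first dump above, cntlid 1
    rpc.py bdev_nvme_reset_controller nvme0
    rpc.py bdev_get_bdevs -b nvme0n1                       # second dump above, cntlid 2 after reconnect
    rpc.py bdev_nvme_detach_controller nvme0

The cntlid change from 1 to 2 between the two dumps is the observable effect of the reset: the host disconnected and re-established the controller, while the namespace (uuid, num_blocks) stayed the same.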
00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.LKIMNpxsDu 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.LKIMNpxsDu 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.LKIMNpxsDu 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.827 [2024-10-28 05:05:08.333731] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:17.827 [2024-10-28 05:05:08.333862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.827 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.827 [2024-10-28 05:05:08.349741] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:18.086 nvme0n1 00:29:18.086 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.086 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
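The TLS leg repeats the attach over a PSK-secured listener on port 4421: a key in NVMe TLS interchange format is written to a tightly-permissioned temp file, registered with the keyring, the subsystem is switched from allow-any-host to an explicit host entry bound to that PSK, and the host reconnects with --psk. In plain RPC form (key value and NQNs as in the trace; the temp-file path is whatever mktemp returns on your system):

    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"                                 # restrict permissions before registering, as the harness does
    rpc.py keyring_file_add_key key0 "$key_path"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both the secure-channel listen and the PSK attach print the "TLS support is considered experimental" notices seen in the trace.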
00:29:18.086 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.086 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.086 [ 00:29:18.086 { 00:29:18.086 "name": "nvme0n1", 00:29:18.086 "aliases": [ 00:29:18.086 "a24b66f3-ce73-4d20-9b1b-e7bcb4c108aa" 00:29:18.086 ], 00:29:18.086 "product_name": "NVMe disk", 00:29:18.086 "block_size": 512, 00:29:18.086 "num_blocks": 2097152, 00:29:18.086 "uuid": "a24b66f3-ce73-4d20-9b1b-e7bcb4c108aa", 00:29:18.086 "numa_id": 0, 00:29:18.086 "assigned_rate_limits": { 00:29:18.086 "rw_ios_per_sec": 0, 00:29:18.086 "rw_mbytes_per_sec": 0, 00:29:18.086 "r_mbytes_per_sec": 0, 00:29:18.086 "w_mbytes_per_sec": 0 00:29:18.086 }, 00:29:18.086 "claimed": false, 00:29:18.086 "zoned": false, 00:29:18.086 "supported_io_types": { 00:29:18.086 "read": true, 00:29:18.086 "write": true, 00:29:18.086 "unmap": false, 00:29:18.086 "flush": true, 00:29:18.086 "reset": true, 00:29:18.086 "nvme_admin": true, 00:29:18.086 "nvme_io": true, 00:29:18.086 "nvme_io_md": false, 00:29:18.086 "write_zeroes": true, 00:29:18.086 "zcopy": false, 00:29:18.086 "get_zone_info": false, 00:29:18.086 "zone_management": false, 00:29:18.086 "zone_append": false, 00:29:18.086 "compare": true, 00:29:18.086 "compare_and_write": true, 00:29:18.086 "abort": true, 00:29:18.086 "seek_hole": false, 00:29:18.086 "seek_data": false, 00:29:18.086 "copy": true, 00:29:18.086 "nvme_iov_md": false 00:29:18.086 }, 00:29:18.086 "memory_domains": [ 00:29:18.086 { 00:29:18.086 "dma_device_id": "system", 00:29:18.086 "dma_device_type": 1 00:29:18.086 } 00:29:18.086 ], 00:29:18.086 "driver_specific": { 00:29:18.086 "nvme": [ 00:29:18.086 { 00:29:18.086 "trid": { 00:29:18.086 "trtype": "TCP", 00:29:18.086 "adrfam": "IPv4", 00:29:18.086 "traddr": "10.0.0.2", 00:29:18.086 "trsvcid": "4421", 00:29:18.086 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:18.086 }, 00:29:18.086 "ctrlr_data": { 00:29:18.086 "cntlid": 3, 00:29:18.086 "vendor_id": "0x8086", 00:29:18.086 "model_number": "SPDK bdev Controller", 00:29:18.086 "serial_number": "00000000000000000000", 00:29:18.086 "firmware_revision": "25.01", 00:29:18.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:18.086 "oacs": { 00:29:18.086 "security": 0, 00:29:18.086 "format": 0, 00:29:18.086 "firmware": 0, 00:29:18.086 "ns_manage": 0 00:29:18.086 }, 00:29:18.086 "multi_ctrlr": true, 00:29:18.086 "ana_reporting": false 00:29:18.086 }, 00:29:18.086 "vs": { 00:29:18.086 "nvme_version": "1.3" 00:29:18.086 }, 00:29:18.086 "ns_data": { 00:29:18.086 "id": 1, 00:29:18.086 "can_share": true 00:29:18.086 } 00:29:18.086 } 00:29:18.086 ], 00:29:18.086 "mp_policy": "active_passive" 00:29:18.086 } 00:29:18.086 } 00:29:18.086 ] 00:29:18.086 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.LKIMNpxsDu 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
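Each successful (re)connection in these dumps shows up as a new controller ID on the same namespace: cntlid 1 on the first attach, 2 after bdev_nvme_reset_controller, 3 on the TLS attach to trsvcid 4421, while the uuid and num_blocks stay constant. If only that field is of interest rather than the full dump, something like the following works (jq is an assumption of this sketch, not something the harness uses here; the JSON path matches the dumps above):

    rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # prints 1, 2, 3 across the three dumps above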
00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.087 rmmod nvme_tcp 00:29:18.087 rmmod nvme_fabrics 00:29:18.087 rmmod nvme_keyring 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 2413624 ']' 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 2413624 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2413624 ']' 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2413624 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2413624 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2413624' 00:29:18.087 killing process with pid 2413624 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2413624 00:29:18.087 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2413624 00:29:18.345 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:18.345 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:18.345 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:18.345 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:18.345 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:29:18.345 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:18.345 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:29:18.345 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.345 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.345 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
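nvmftestfini above is the mirror image of the setup: unload the kernel modules pulled in by the earlier modprobe nvme-tcp, kill the target, strip the SPDK-tagged iptables rules by round-tripping through iptables-save, flush the test addresses and drop the namespace. Sketched out below; the final namespace delete is what _remove_spdk_ns presumably amounts to here, its xtrace being suppressed in the log, so treat that line as an assumption:

    modprobe -v -r nvme-tcp           # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                   # the nvmf_tgt started at the top of the test (pid 2413624 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only rules tagged by the harness
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns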
00:29:18.345 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.345 05:05:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.251 05:05:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:20.251 00:29:20.251 real 0m6.309s 00:29:20.251 user 0m3.027s 00:29:20.251 sys 0m1.873s 00:29:20.251 05:05:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.251 05:05:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:20.251 ************************************ 00:29:20.251 END TEST nvmf_async_init 00:29:20.251 ************************************ 00:29:20.251 05:05:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:20.251 05:05:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:20.251 05:05:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.251 05:05:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.251 ************************************ 00:29:20.251 START TEST dma 00:29:20.251 ************************************ 00:29:20.251 05:05:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:20.509 * Looking for test storage... 00:29:20.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.509 05:05:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:29:20.509 05:05:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1689 -- # lcov --version 00:29:20.509 05:05:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:29:20.509 05:05:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:29:20.509 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.509 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.509 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.509 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.509 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.509 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.509 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:29:20.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.510 --rc genhtml_branch_coverage=1 00:29:20.510 --rc genhtml_function_coverage=1 00:29:20.510 --rc genhtml_legend=1 00:29:20.510 --rc geninfo_all_blocks=1 00:29:20.510 --rc geninfo_unexecuted_blocks=1 00:29:20.510 00:29:20.510 ' 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:29:20.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.510 --rc genhtml_branch_coverage=1 00:29:20.510 --rc genhtml_function_coverage=1 00:29:20.510 --rc genhtml_legend=1 00:29:20.510 --rc geninfo_all_blocks=1 00:29:20.510 --rc geninfo_unexecuted_blocks=1 00:29:20.510 00:29:20.510 ' 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:29:20.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.510 --rc genhtml_branch_coverage=1 00:29:20.510 --rc genhtml_function_coverage=1 00:29:20.510 --rc genhtml_legend=1 00:29:20.510 --rc geninfo_all_blocks=1 00:29:20.510 --rc geninfo_unexecuted_blocks=1 00:29:20.510 00:29:20.510 ' 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:29:20.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.510 --rc genhtml_branch_coverage=1 00:29:20.510 --rc genhtml_function_coverage=1 00:29:20.510 --rc genhtml_legend=1 00:29:20.510 --rc geninfo_all_blocks=1 00:29:20.510 --rc geninfo_unexecuted_blocks=1 00:29:20.510 00:29:20.510 ' 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.510 
05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:20.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:20.510 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.511 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.511 05:05:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.511 05:05:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:20.511 05:05:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:20.511 00:29:20.511 real 0m0.152s 00:29:20.511 user 0m0.102s 00:29:20.511 sys 0m0.059s 00:29:20.511 05:05:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.511 05:05:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:20.511 ************************************ 00:29:20.511 END TEST dma 00:29:20.511 ************************************ 00:29:20.511 05:05:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:20.511 05:05:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:20.511 05:05:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.511 05:05:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.511 ************************************ 00:29:20.511 START TEST nvmf_identify 00:29:20.511 
************************************ 00:29:20.511 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:20.511 * Looking for test storage... 00:29:20.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.511 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:29:20.511 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1689 -- # lcov --version 00:29:20.511 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:29:20.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.770 --rc genhtml_branch_coverage=1 00:29:20.770 --rc genhtml_function_coverage=1 00:29:20.770 --rc genhtml_legend=1 00:29:20.770 --rc geninfo_all_blocks=1 00:29:20.770 --rc geninfo_unexecuted_blocks=1 00:29:20.770 00:29:20.770 ' 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:29:20.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.770 --rc genhtml_branch_coverage=1 00:29:20.770 --rc genhtml_function_coverage=1 00:29:20.770 --rc genhtml_legend=1 00:29:20.770 --rc geninfo_all_blocks=1 00:29:20.770 --rc geninfo_unexecuted_blocks=1 00:29:20.770 00:29:20.770 ' 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:29:20.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.770 --rc genhtml_branch_coverage=1 00:29:20.770 --rc genhtml_function_coverage=1 00:29:20.770 --rc genhtml_legend=1 00:29:20.770 --rc geninfo_all_blocks=1 00:29:20.770 --rc geninfo_unexecuted_blocks=1 00:29:20.770 00:29:20.770 ' 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:29:20.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.770 --rc genhtml_branch_coverage=1 00:29:20.770 --rc genhtml_function_coverage=1 00:29:20.770 --rc genhtml_legend=1 00:29:20.770 --rc geninfo_all_blocks=1 00:29:20.770 --rc geninfo_unexecuted_blocks=1 00:29:20.770 00:29:20.770 ' 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.770 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:20.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.771 05:05:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:22.673 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:22.673 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
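The xtrace above is gather_supported_nvmf_pci_devs at work: it seeds the e810/x722/mlx arrays with the supported Intel and Mellanox device IDs, keeps the two E810 ports present on this box (0000:0a:00.0 and 0000:0a:00.1, device 0x159b bound to ice), and resolves each PCI function to its kernel netdev by globbing sysfs. A minimal standalone sketch of that last lookup, using the same sysfs path the helper expands at nvmf/common.sh@409 (only the loop variable names are added here):

  pci=0000:0a:00.0                                   # one of the E810 ports found above
  for net in /sys/bus/pci/devices/$pci/net/*; do     # same glob as nvmf/common.sh@409
      dev=${net##*/}                                 # strip the sysfs prefix, e.g. cvl_0_0
      echo "Found net devices under $pci: $dev"
  done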
00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:22.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:22.673 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:22.673 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.674 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.674 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.674 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:22.674 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:22.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:29:22.674 00:29:22.674 --- 10.0.0.2 ping statistics --- 00:29:22.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.674 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:29:22.674 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:22.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:29:22.674 00:29:22.674 --- 10.0.0.1 ping statistics --- 00:29:22.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.674 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:29:22.674 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.674 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:29:22.674 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2415848 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2415848 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2415848 ']' 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:22.931 05:05:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.931 [2024-10-28 05:05:13.339104] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:29:22.931 [2024-10-28 05:05:13.339180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.931 [2024-10-28 05:05:13.476916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
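nvmf_tcp_init then carves the two ports into a point-to-point test topology: cvl_0_0 becomes the target interface inside the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions are ping-verified, nvme-tcp is loaded, and nvmf_tgt is launched inside the namespace. Replayed by hand, the sequence is roughly the following sketch (commands and names taken from the trace; the relative build path and the trailing backgrounding are simplifications):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &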
00:29:22.931 [2024-10-28 05:05:13.512861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:23.188 [2024-10-28 05:05:13.562575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.188 [2024-10-28 05:05:13.562656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.188 [2024-10-28 05:05:13.562692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.188 [2024-10-28 05:05:13.562704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.189 [2024-10-28 05:05:13.562714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.189 [2024-10-28 05:05:13.564383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.189 [2024-10-28 05:05:13.564450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.189 [2024-10-28 05:05:13.564539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.189 [2024-10-28 05:05:13.564541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.121 [2024-10-28 05:05:14.382860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.121 Malloc0 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.121 
05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.121 [2024-10-28 05:05:14.481526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.121 [ 00:29:24.121 { 00:29:24.121 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:24.121 "subtype": "Discovery", 00:29:24.121 "listen_addresses": [ 00:29:24.121 { 00:29:24.121 "trtype": "TCP", 00:29:24.121 "adrfam": "IPv4", 00:29:24.121 "traddr": "10.0.0.2", 00:29:24.121 "trsvcid": "4420" 00:29:24.121 } 00:29:24.121 ], 00:29:24.121 "allow_any_host": true, 00:29:24.121 "hosts": [] 00:29:24.121 }, 00:29:24.121 { 00:29:24.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.121 "subtype": "NVMe", 00:29:24.121 "listen_addresses": [ 00:29:24.121 { 00:29:24.121 "trtype": "TCP", 00:29:24.121 "adrfam": "IPv4", 00:29:24.121 "traddr": "10.0.0.2", 00:29:24.121 "trsvcid": "4420" 00:29:24.121 } 00:29:24.121 ], 00:29:24.121 "allow_any_host": true, 00:29:24.121 "hosts": [], 00:29:24.121 "serial_number": "SPDK00000000000001", 00:29:24.121 "model_number": "SPDK bdev Controller", 00:29:24.121 "max_namespaces": 32, 00:29:24.121 "min_cntlid": 1, 00:29:24.121 "max_cntlid": 65519, 00:29:24.121 "namespaces": [ 00:29:24.121 { 00:29:24.121 "nsid": 1, 00:29:24.121 "bdev_name": "Malloc0", 00:29:24.121 "name": "Malloc0", 00:29:24.121 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:24.121 "eui64": "ABCDEF0123456789", 00:29:24.121 "uuid": "d6fb1c29-558b-45f4-b6a5-220fcc4f982d" 00:29:24.121 } 00:29:24.121 ] 00:29:24.121 } 00:29:24.121 ] 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.121 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:24.121 [2024-10-28 05:05:14.524081] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
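With the target up, host/identify.sh provisions it over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and data plus discovery listeners on 10.0.0.2:4420; nvmf_get_subsystems then returns the JSON dump shown above. A sketch of the same sequence issued directly, assuming rpc_cmd resolves to scripts/rpc.py talking to the target's /var/tmp/spdk.sock (flags copied verbatim from the rpc_cmd trace):

  rpc=./scripts/rpc.py                               # uses /var/tmp/spdk.sock by default
  $rpc nvmf_create_transport -t tcp -o -u 8192       # same flags as NVMF_TRANSPORT_OPTS above
  $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_get_subsystems                           # prints the subsystem JSON seen above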
00:29:24.121 [2024-10-28 05:05:14.524128] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415999 ] 00:29:24.121 [2024-10-28 05:05:14.643381] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:24.121 [2024-10-28 05:05:14.677726] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:24.121 [2024-10-28 05:05:14.677797] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:24.121 [2024-10-28 05:05:14.677808] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:24.121 [2024-10-28 05:05:14.677824] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:24.121 [2024-10-28 05:05:14.677838] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:24.121 [2024-10-28 05:05:14.678544] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:24.121 [2024-10-28 05:05:14.678611] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2060df0 0 00:29:24.121 [2024-10-28 05:05:14.688649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:24.121 [2024-10-28 05:05:14.688675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:24.121 [2024-10-28 05:05:14.688685] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:24.121 [2024-10-28 05:05:14.688691] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:24.121 [2024-10-28 05:05:14.688729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.688742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.688749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2060df0) 00:29:24.122 [2024-10-28 05:05:14.688767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:24.122 [2024-10-28 05:05:14.688795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cea40, cid 0, qid 0 00:29:24.122 [2024-10-28 05:05:14.696666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.122 [2024-10-28 05:05:14.696687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.122 [2024-10-28 05:05:14.696694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.696707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cea40) on tqpair=0x2060df0 00:29:24.122 [2024-10-28 05:05:14.696724] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:24.122 [2024-10-28 05:05:14.696750] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:24.122 [2024-10-28 05:05:14.696760] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:24.122 [2024-10-28 05:05:14.696781] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.696790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.696797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2060df0) 00:29:24.122 [2024-10-28 05:05:14.696809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.122 [2024-10-28 05:05:14.696834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cea40, cid 0, qid 0 00:29:24.122 [2024-10-28 05:05:14.696973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.122 [2024-10-28 05:05:14.696988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.122 [2024-10-28 05:05:14.696995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.697002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cea40) on tqpair=0x2060df0 00:29:24.122 [2024-10-28 05:05:14.697012] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:24.122 [2024-10-28 05:05:14.697025] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:24.122 [2024-10-28 05:05:14.697037] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.697045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.697052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2060df0) 00:29:24.122 [2024-10-28 05:05:14.697063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.122 [2024-10-28 05:05:14.697085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cea40, cid 0, qid 0 00:29:24.122 [2024-10-28 05:05:14.697191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.122 [2024-10-28 05:05:14.697206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.122 [2024-10-28 05:05:14.697213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.697220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cea40) on tqpair=0x2060df0 00:29:24.122 [2024-10-28 05:05:14.697228] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:24.122 [2024-10-28 05:05:14.697242] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:24.122 [2024-10-28 05:05:14.697255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.697262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.697268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2060df0) 00:29:24.122 [2024-10-28 05:05:14.697279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.122 [2024-10-28 05:05:14.697301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cea40, 
cid 0, qid 0 00:29:24.122 [2024-10-28 05:05:14.697407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.122 [2024-10-28 05:05:14.697419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.122 [2024-10-28 05:05:14.697426] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.697437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cea40) on tqpair=0x2060df0 00:29:24.122 [2024-10-28 05:05:14.697447] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:24.122 [2024-10-28 05:05:14.697470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.697481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.697487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2060df0) 00:29:24.122 [2024-10-28 05:05:14.697498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.122 [2024-10-28 05:05:14.697519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cea40, cid 0, qid 0 00:29:24.122 [2024-10-28 05:05:14.697621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.122 [2024-10-28 05:05:14.697646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.122 [2024-10-28 05:05:14.697661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.697670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cea40) on tqpair=0x2060df0 00:29:24.122 [2024-10-28 05:05:14.697679] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:24.122 [2024-10-28 05:05:14.697687] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:24.122 [2024-10-28 05:05:14.697701] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:24.122 [2024-10-28 05:05:14.697811] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:24.122 [2024-10-28 05:05:14.697819] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:24.122 [2024-10-28 05:05:14.697834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.697841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.697848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2060df0) 00:29:24.122 [2024-10-28 05:05:14.697858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.122 [2024-10-28 05:05:14.697881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cea40, cid 0, qid 0 00:29:24.122 [2024-10-28 05:05:14.697999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.122 [2024-10-28 05:05:14.698014] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.122 [2024-10-28 05:05:14.698021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.698028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cea40) on tqpair=0x2060df0 00:29:24.122 [2024-10-28 05:05:14.698036] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:24.122 [2024-10-28 05:05:14.698053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.698062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.698069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2060df0) 00:29:24.122 [2024-10-28 05:05:14.698079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.122 [2024-10-28 05:05:14.698101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cea40, cid 0, qid 0 00:29:24.122 [2024-10-28 05:05:14.698204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.122 [2024-10-28 05:05:14.698226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.122 [2024-10-28 05:05:14.698234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.698241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cea40) on tqpair=0x2060df0 00:29:24.122 [2024-10-28 05:05:14.698249] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:24.122 [2024-10-28 05:05:14.698258] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:24.122 [2024-10-28 05:05:14.698272] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:24.122 [2024-10-28 05:05:14.698286] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:24.122 [2024-10-28 05:05:14.698302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.698310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2060df0) 00:29:24.122 [2024-10-28 05:05:14.698321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.122 [2024-10-28 05:05:14.698343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cea40, cid 0, qid 0 00:29:24.122 [2024-10-28 05:05:14.698497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.122 [2024-10-28 05:05:14.698513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.122 [2024-10-28 05:05:14.698520] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.698527] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2060df0): datao=0, datal=4096, cccid=0 00:29:24.122 [2024-10-28 05:05:14.698535] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x20cea40) on tqpair(0x2060df0): expected_datao=0, payload_size=4096 00:29:24.122 [2024-10-28 05:05:14.698543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.698553] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.698562] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.698607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.122 [2024-10-28 05:05:14.698619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.122 [2024-10-28 05:05:14.698626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.122 [2024-10-28 05:05:14.698639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cea40) on tqpair=0x2060df0 00:29:24.122 [2024-10-28 05:05:14.698654] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:24.122 [2024-10-28 05:05:14.698663] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:24.122 [2024-10-28 05:05:14.698671] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:24.122 [2024-10-28 05:05:14.698679] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:24.122 [2024-10-28 05:05:14.698687] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:24.122 [2024-10-28 05:05:14.698695] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:24.123 [2024-10-28 05:05:14.698710] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:24.123 [2024-10-28 05:05:14.698723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.698730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.698742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2060df0) 00:29:24.123 [2024-10-28 05:05:14.698754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:24.123 [2024-10-28 05:05:14.698776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cea40, cid 0, qid 0 00:29:24.123 [2024-10-28 05:05:14.698931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.123 [2024-10-28 05:05:14.698946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.123 [2024-10-28 05:05:14.698953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.698960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cea40) on tqpair=0x2060df0 00:29:24.123 [2024-10-28 05:05:14.698977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.698986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.698992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2060df0) 00:29:24.123 
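The -L all debug stream around this point is the admin-queue bring-up spdk_nvme_identify performs against the discovery controller: FABRIC CONNECT, PROPERTY GET of VS and CAP, CC.EN written to 1 with CSTS.RDY polled, IDENTIFY with CNS 01h, async event configuration (SET FEATURES 0Bh) and keep-alive setup, and finally GET LOG PAGE 70h to fetch the discovery log printed below. The invocation itself, copied from host/identify.sh above, plus a hypothetical follow-up against the NVM subsystem using the same transport-ID syntax (the second command is not part of this trace):

  ID=./build/bin/spdk_nvme_identify
  # discovery controller -- this is the run being traced here
  $ID -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
  # NVM subsystem created earlier (hypothetical follow-up)
  $ID -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'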
[2024-10-28 05:05:14.699003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.123 [2024-10-28 05:05:14.699013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2060df0) 00:29:24.123 [2024-10-28 05:05:14.699036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.123 [2024-10-28 05:05:14.699046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2060df0) 00:29:24.123 [2024-10-28 05:05:14.699067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.123 [2024-10-28 05:05:14.699077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2060df0) 00:29:24.123 [2024-10-28 05:05:14.699099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.123 [2024-10-28 05:05:14.699108] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:24.123 [2024-10-28 05:05:14.699122] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:24.123 [2024-10-28 05:05:14.699135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699143] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2060df0) 00:29:24.123 [2024-10-28 05:05:14.699168] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.123 [2024-10-28 05:05:14.699190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cea40, cid 0, qid 0 00:29:24.123 [2024-10-28 05:05:14.699202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cebc0, cid 1, qid 0 00:29:24.123 [2024-10-28 05:05:14.699224] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ced40, cid 2, qid 0 00:29:24.123 [2024-10-28 05:05:14.699233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ceec0, cid 3, qid 0 00:29:24.123 [2024-10-28 05:05:14.699240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cf040, cid 4, qid 0 00:29:24.123 [2024-10-28 05:05:14.699445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.123 [2024-10-28 05:05:14.699461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.123 [2024-10-28 05:05:14.699468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:29:24.123 [2024-10-28 05:05:14.699475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cf040) on tqpair=0x2060df0 00:29:24.123 [2024-10-28 05:05:14.699489] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:24.123 [2024-10-28 05:05:14.699499] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:24.123 [2024-10-28 05:05:14.699517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2060df0) 00:29:24.123 [2024-10-28 05:05:14.699538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.123 [2024-10-28 05:05:14.699560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cf040, cid 4, qid 0 00:29:24.123 [2024-10-28 05:05:14.699723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.123 [2024-10-28 05:05:14.699739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.123 [2024-10-28 05:05:14.699746] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699752] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2060df0): datao=0, datal=4096, cccid=4 00:29:24.123 [2024-10-28 05:05:14.699760] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20cf040) on tqpair(0x2060df0): expected_datao=0, payload_size=4096 00:29:24.123 [2024-10-28 05:05:14.699767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699777] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699785] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.123 [2024-10-28 05:05:14.699840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.123 [2024-10-28 05:05:14.699847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cf040) on tqpair=0x2060df0 00:29:24.123 [2024-10-28 05:05:14.699871] nvme_ctrlr.c:4166:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:24.123 [2024-10-28 05:05:14.699908] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2060df0) 00:29:24.123 [2024-10-28 05:05:14.699930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.123 [2024-10-28 05:05:14.699941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.699955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2060df0) 00:29:24.123 [2024-10-28 05:05:14.699964] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.123 [2024-10-28 05:05:14.699990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cf040, cid 4, qid 0 00:29:24.123 [2024-10-28 05:05:14.700003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cf1c0, cid 5, qid 0 00:29:24.123 [2024-10-28 05:05:14.700193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.123 [2024-10-28 05:05:14.700208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.123 [2024-10-28 05:05:14.700215] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.700226] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2060df0): datao=0, datal=1024, cccid=4 00:29:24.123 [2024-10-28 05:05:14.700234] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20cf040) on tqpair(0x2060df0): expected_datao=0, payload_size=1024 00:29:24.123 [2024-10-28 05:05:14.700241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.700251] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.700259] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.700267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.123 [2024-10-28 05:05:14.700276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.123 [2024-10-28 05:05:14.700283] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.123 [2024-10-28 05:05:14.700289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cf1c0) on tqpair=0x2060df0 00:29:24.381 [2024-10-28 05:05:14.743662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.381 [2024-10-28 05:05:14.743682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.381 [2024-10-28 05:05:14.743690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.743697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cf040) on tqpair=0x2060df0 00:29:24.381 [2024-10-28 05:05:14.743714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.743723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2060df0) 00:29:24.381 [2024-10-28 05:05:14.743734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.381 [2024-10-28 05:05:14.743780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cf040, cid 4, qid 0 00:29:24.381 [2024-10-28 05:05:14.743913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.381 [2024-10-28 05:05:14.743929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.381 [2024-10-28 05:05:14.743936] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.743943] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2060df0): datao=0, datal=3072, cccid=4 00:29:24.381 [2024-10-28 05:05:14.743950] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20cf040) on tqpair(0x2060df0): expected_datao=0, payload_size=3072 00:29:24.381 [2024-10-28 05:05:14.743957] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.743968] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.743975] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.743989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.381 [2024-10-28 05:05:14.743999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.381 [2024-10-28 05:05:14.744005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.744012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cf040) on tqpair=0x2060df0 00:29:24.381 [2024-10-28 05:05:14.744026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.744035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2060df0) 00:29:24.381 [2024-10-28 05:05:14.744046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.381 [2024-10-28 05:05:14.744074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20cf040, cid 4, qid 0 00:29:24.381 [2024-10-28 05:05:14.744196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.381 [2024-10-28 05:05:14.744208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.381 [2024-10-28 05:05:14.744215] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.744221] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2060df0): datao=0, datal=8, cccid=4 00:29:24.381 [2024-10-28 05:05:14.744234] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20cf040) on tqpair(0x2060df0): expected_datao=0, payload_size=8 00:29:24.381 [2024-10-28 05:05:14.744242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.744251] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.744258] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.784731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.381 [2024-10-28 05:05:14.784751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.381 [2024-10-28 05:05:14.784759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.381 [2024-10-28 05:05:14.784766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cf040) on tqpair=0x2060df0 00:29:24.381 ===================================================== 00:29:24.381 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:24.381 ===================================================== 00:29:24.381 Controller Capabilities/Features 00:29:24.381 ================================ 00:29:24.381 Vendor ID: 0000 00:29:24.381 Subsystem Vendor ID: 0000 00:29:24.381 Serial Number: .................... 00:29:24.381 Model Number: ........................................ 
00:29:24.381 Firmware Version: 25.01 00:29:24.381 Recommended Arb Burst: 0 00:29:24.381 IEEE OUI Identifier: 00 00 00 00:29:24.381 Multi-path I/O 00:29:24.381 May have multiple subsystem ports: No 00:29:24.381 May have multiple controllers: No 00:29:24.381 Associated with SR-IOV VF: No 00:29:24.381 Max Data Transfer Size: 131072 00:29:24.381 Max Number of Namespaces: 0 00:29:24.381 Max Number of I/O Queues: 1024 00:29:24.381 NVMe Specification Version (VS): 1.3 00:29:24.381 NVMe Specification Version (Identify): 1.3 00:29:24.381 Maximum Queue Entries: 128 00:29:24.381 Contiguous Queues Required: Yes 00:29:24.381 Arbitration Mechanisms Supported 00:29:24.381 Weighted Round Robin: Not Supported 00:29:24.381 Vendor Specific: Not Supported 00:29:24.381 Reset Timeout: 15000 ms 00:29:24.381 Doorbell Stride: 4 bytes 00:29:24.381 NVM Subsystem Reset: Not Supported 00:29:24.381 Command Sets Supported 00:29:24.381 NVM Command Set: Supported 00:29:24.381 Boot Partition: Not Supported 00:29:24.381 Memory Page Size Minimum: 4096 bytes 00:29:24.381 Memory Page Size Maximum: 4096 bytes 00:29:24.381 Persistent Memory Region: Not Supported 00:29:24.381 Optional Asynchronous Events Supported 00:29:24.381 Namespace Attribute Notices: Not Supported 00:29:24.381 Firmware Activation Notices: Not Supported 00:29:24.381 ANA Change Notices: Not Supported 00:29:24.381 PLE Aggregate Log Change Notices: Not Supported 00:29:24.381 LBA Status Info Alert Notices: Not Supported 00:29:24.381 EGE Aggregate Log Change Notices: Not Supported 00:29:24.381 Normal NVM Subsystem Shutdown event: Not Supported 00:29:24.381 Zone Descriptor Change Notices: Not Supported 00:29:24.381 Discovery Log Change Notices: Supported 00:29:24.381 Controller Attributes 00:29:24.381 128-bit Host Identifier: Not Supported 00:29:24.381 Non-Operational Permissive Mode: Not Supported 00:29:24.381 NVM Sets: Not Supported 00:29:24.381 Read Recovery Levels: Not Supported 00:29:24.381 Endurance Groups: Not Supported 00:29:24.381 Predictable Latency Mode: Not Supported 00:29:24.381 Traffic Based Keep ALive: Not Supported 00:29:24.381 Namespace Granularity: Not Supported 00:29:24.381 SQ Associations: Not Supported 00:29:24.381 UUID List: Not Supported 00:29:24.381 Multi-Domain Subsystem: Not Supported 00:29:24.381 Fixed Capacity Management: Not Supported 00:29:24.381 Variable Capacity Management: Not Supported 00:29:24.381 Delete Endurance Group: Not Supported 00:29:24.381 Delete NVM Set: Not Supported 00:29:24.381 Extended LBA Formats Supported: Not Supported 00:29:24.381 Flexible Data Placement Supported: Not Supported 00:29:24.381 00:29:24.381 Controller Memory Buffer Support 00:29:24.381 ================================ 00:29:24.381 Supported: No 00:29:24.381 00:29:24.381 Persistent Memory Region Support 00:29:24.381 ================================ 00:29:24.381 Supported: No 00:29:24.381 00:29:24.381 Admin Command Set Attributes 00:29:24.381 ============================ 00:29:24.381 Security Send/Receive: Not Supported 00:29:24.381 Format NVM: Not Supported 00:29:24.381 Firmware Activate/Download: Not Supported 00:29:24.381 Namespace Management: Not Supported 00:29:24.381 Device Self-Test: Not Supported 00:29:24.381 Directives: Not Supported 00:29:24.381 NVMe-MI: Not Supported 00:29:24.382 Virtualization Management: Not Supported 00:29:24.382 Doorbell Buffer Config: Not Supported 00:29:24.382 Get LBA Status Capability: Not Supported 00:29:24.382 Command & Feature Lockdown Capability: Not Supported 00:29:24.382 Abort Command Limit: 1 00:29:24.382 Async 
Event Request Limit: 4 00:29:24.382 Number of Firmware Slots: N/A 00:29:24.382 Firmware Slot 1 Read-Only: N/A 00:29:24.382 Firmware Activation Without Reset: N/A 00:29:24.382 Multiple Update Detection Support: N/A 00:29:24.382 Firmware Update Granularity: No Information Provided 00:29:24.382 Per-Namespace SMART Log: No 00:29:24.382 Asymmetric Namespace Access Log Page: Not Supported 00:29:24.382 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:24.382 Command Effects Log Page: Not Supported 00:29:24.382 Get Log Page Extended Data: Supported 00:29:24.382 Telemetry Log Pages: Not Supported 00:29:24.382 Persistent Event Log Pages: Not Supported 00:29:24.382 Supported Log Pages Log Page: May Support 00:29:24.382 Commands Supported & Effects Log Page: Not Supported 00:29:24.382 Feature Identifiers & Effects Log Page:May Support 00:29:24.382 NVMe-MI Commands & Effects Log Page: May Support 00:29:24.382 Data Area 4 for Telemetry Log: Not Supported 00:29:24.382 Error Log Page Entries Supported: 128 00:29:24.382 Keep Alive: Not Supported 00:29:24.382 00:29:24.382 NVM Command Set Attributes 00:29:24.382 ========================== 00:29:24.382 Submission Queue Entry Size 00:29:24.382 Max: 1 00:29:24.382 Min: 1 00:29:24.382 Completion Queue Entry Size 00:29:24.382 Max: 1 00:29:24.382 Min: 1 00:29:24.382 Number of Namespaces: 0 00:29:24.382 Compare Command: Not Supported 00:29:24.382 Write Uncorrectable Command: Not Supported 00:29:24.382 Dataset Management Command: Not Supported 00:29:24.382 Write Zeroes Command: Not Supported 00:29:24.382 Set Features Save Field: Not Supported 00:29:24.382 Reservations: Not Supported 00:29:24.382 Timestamp: Not Supported 00:29:24.382 Copy: Not Supported 00:29:24.382 Volatile Write Cache: Not Present 00:29:24.382 Atomic Write Unit (Normal): 1 00:29:24.382 Atomic Write Unit (PFail): 1 00:29:24.382 Atomic Compare & Write Unit: 1 00:29:24.382 Fused Compare & Write: Supported 00:29:24.382 Scatter-Gather List 00:29:24.382 SGL Command Set: Supported 00:29:24.382 SGL Keyed: Supported 00:29:24.382 SGL Bit Bucket Descriptor: Not Supported 00:29:24.382 SGL Metadata Pointer: Not Supported 00:29:24.382 Oversized SGL: Not Supported 00:29:24.382 SGL Metadata Address: Not Supported 00:29:24.382 SGL Offset: Supported 00:29:24.382 Transport SGL Data Block: Not Supported 00:29:24.382 Replay Protected Memory Block: Not Supported 00:29:24.382 00:29:24.382 Firmware Slot Information 00:29:24.382 ========================= 00:29:24.382 Active slot: 0 00:29:24.382 00:29:24.382 00:29:24.382 Error Log 00:29:24.382 ========= 00:29:24.382 00:29:24.382 Active Namespaces 00:29:24.382 ================= 00:29:24.382 Discovery Log Page 00:29:24.382 ================== 00:29:24.382 Generation Counter: 2 00:29:24.382 Number of Records: 2 00:29:24.382 Record Format: 0 00:29:24.382 00:29:24.382 Discovery Log Entry 0 00:29:24.382 ---------------------- 00:29:24.382 Transport Type: 3 (TCP) 00:29:24.382 Address Family: 1 (IPv4) 00:29:24.382 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:24.382 Entry Flags: 00:29:24.382 Duplicate Returned Information: 1 00:29:24.382 Explicit Persistent Connection Support for Discovery: 1 00:29:24.382 Transport Requirements: 00:29:24.382 Secure Channel: Not Required 00:29:24.382 Port ID: 0 (0x0000) 00:29:24.382 Controller ID: 65535 (0xffff) 00:29:24.382 Admin Max SQ Size: 128 00:29:24.382 Transport Service Identifier: 4420 00:29:24.382 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:24.382 Transport Address: 10.0.0.2 00:29:24.382 
Discovery Log Entry 1 00:29:24.382 ---------------------- 00:29:24.382 Transport Type: 3 (TCP) 00:29:24.382 Address Family: 1 (IPv4) 00:29:24.382 Subsystem Type: 2 (NVM Subsystem) 00:29:24.382 Entry Flags: 00:29:24.382 Duplicate Returned Information: 0 00:29:24.382 Explicit Persistent Connection Support for Discovery: 0 00:29:24.382 Transport Requirements: 00:29:24.382 Secure Channel: Not Required 00:29:24.382 Port ID: 0 (0x0000) 00:29:24.382 Controller ID: 65535 (0xffff) 00:29:24.382 Admin Max SQ Size: 128 00:29:24.382 Transport Service Identifier: 4420 00:29:24.382 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:24.382 Transport Address: 10.0.0.2 [2024-10-28 05:05:14.784887] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:24.382 [2024-10-28 05:05:14.784910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cea40) on tqpair=0x2060df0 00:29:24.382 [2024-10-28 05:05:14.784923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.382 [2024-10-28 05:05:14.784933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20cebc0) on tqpair=0x2060df0 00:29:24.382 [2024-10-28 05:05:14.784941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.382 [2024-10-28 05:05:14.784949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ced40) on tqpair=0x2060df0 00:29:24.382 [2024-10-28 05:05:14.784956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.382 [2024-10-28 05:05:14.784964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ceec0) on tqpair=0x2060df0 00:29:24.382 [2024-10-28 05:05:14.784971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.382 [2024-10-28 05:05:14.784984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.784992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.784998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2060df0) 00:29:24.382 [2024-10-28 05:05:14.785024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.382 [2024-10-28 05:05:14.785049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ceec0, cid 3, qid 0 00:29:24.382 [2024-10-28 05:05:14.785199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.382 [2024-10-28 05:05:14.785215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.382 [2024-10-28 05:05:14.785222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.785229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ceec0) on tqpair=0x2060df0 00:29:24.382 [2024-10-28 05:05:14.785246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.785256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.785262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2060df0) 00:29:24.382 [2024-10-28 
05:05:14.785273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.382 [2024-10-28 05:05:14.785300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ceec0, cid 3, qid 0 00:29:24.382 [2024-10-28 05:05:14.785417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.382 [2024-10-28 05:05:14.785432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.382 [2024-10-28 05:05:14.785439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.785449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ceec0) on tqpair=0x2060df0 00:29:24.382 [2024-10-28 05:05:14.785459] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:24.382 [2024-10-28 05:05:14.785467] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:24.382 [2024-10-28 05:05:14.785483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.785492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.785498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2060df0) 00:29:24.382 [2024-10-28 05:05:14.785509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.382 [2024-10-28 05:05:14.785530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ceec0, cid 3, qid 0 00:29:24.382 [2024-10-28 05:05:14.785631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.382 [2024-10-28 05:05:14.785662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.382 [2024-10-28 05:05:14.785673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.785680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ceec0) on tqpair=0x2060df0 00:29:24.382 [2024-10-28 05:05:14.785699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.785710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.785716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2060df0) 00:29:24.382 [2024-10-28 05:05:14.785727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.382 [2024-10-28 05:05:14.785749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ceec0, cid 3, qid 0 00:29:24.382 [2024-10-28 05:05:14.785855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.382 [2024-10-28 05:05:14.785868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.382 [2024-10-28 05:05:14.785875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.785881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ceec0) on tqpair=0x2060df0 00:29:24.382 [2024-10-28 05:05:14.785897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.785906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.382 [2024-10-28 05:05:14.785912] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2060df0) 00:29:24.382 [2024-10-28 05:05:14.785923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.383 [2024-10-28 05:05:14.785943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ceec0, cid 3, qid 0 00:29:24.383 [2024-10-28 05:05:14.786049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.383 [2024-10-28 05:05:14.786065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.383 [2024-10-28 05:05:14.786072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.786080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ceec0) on tqpair=0x2060df0 00:29:24.383 [2024-10-28 05:05:14.786096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.786105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.786112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2060df0) 00:29:24.383 [2024-10-28 05:05:14.786123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.383 [2024-10-28 05:05:14.786144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ceec0, cid 3, qid 0 00:29:24.383 [2024-10-28 05:05:14.786245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.383 [2024-10-28 05:05:14.786261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.383 [2024-10-28 05:05:14.786270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.786277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ceec0) on tqpair=0x2060df0 00:29:24.383 [2024-10-28 05:05:14.786293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.786303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.786310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2060df0) 00:29:24.383 [2024-10-28 05:05:14.786320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.383 [2024-10-28 05:05:14.786341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ceec0, cid 3, qid 0 00:29:24.383 [2024-10-28 05:05:14.786438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.383 [2024-10-28 05:05:14.786452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.383 [2024-10-28 05:05:14.786458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.786465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ceec0) on tqpair=0x2060df0 00:29:24.383 [2024-10-28 05:05:14.786480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.786489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.786495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2060df0) 00:29:24.383 [2024-10-28 05:05:14.786506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.383 [2024-10-28 05:05:14.786526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ceec0, cid 3, qid 0 00:29:24.383 [2024-10-28 05:05:14.786629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.383 [2024-10-28 05:05:14.790672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.383 [2024-10-28 05:05:14.790682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.790690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ceec0) on tqpair=0x2060df0 00:29:24.383 [2024-10-28 05:05:14.790708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.790733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.790740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2060df0) 00:29:24.383 [2024-10-28 05:05:14.790751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.383 [2024-10-28 05:05:14.790774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20ceec0, cid 3, qid 0 00:29:24.383 [2024-10-28 05:05:14.790889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.383 [2024-10-28 05:05:14.790905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.383 [2024-10-28 05:05:14.790911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.383 [2024-10-28 05:05:14.790918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20ceec0) on tqpair=0x2060df0 00:29:24.383 [2024-10-28 05:05:14.790931] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:29:24.383 00:29:24.383 05:05:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:24.383 [2024-10-28 05:05:14.827097] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:29:24.383 [2024-10-28 05:05:14.827149] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416090 ] 00:29:24.383 [2024-10-28 05:05:14.943907] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
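For context, a minimal, hypothetical C sketch (not part of the test scripts driving this log) of what the spdk_nvme_identify invocation above does at the API level: parse the same transport-ID string passed via -r, connect to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and read the Identify Controller data. It uses only public SPDK APIs (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data, spdk_nvme_detach); the program name, printed fields, and minimal error handling are illustrative assumptions, not the tool's actual source.

/* Hypothetical sketch: connect to the subsystem advertised in the discovery
 * log above and print a few Identify Controller fields. Assumes an SPDK
 * build environment; compile/link flags are omitted. */
#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";          /* illustrative name */
    if (spdk_env_init(&env_opts) != 0) {
        fprintf(stderr, "spdk_env_init failed\n");
        return 1;
    }

    /* Same format as the -r argument used by spdk_nvme_identify above. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        fprintf(stderr, "failed to parse transport ID\n");
        return 1;
    }

    /* spdk_nvme_connect() performs the admin-queue bring-up traced in the
     * DEBUG lines that follow: FABRIC CONNECT, read VS/CAP, set CC.EN = 1,
     * wait for CSTS.RDY = 1, IDENTIFY, SET FEATURES, keep-alive setup. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "spdk_nvme_connect failed\n");
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("VID: 0x%04x  Model: %.40s  FW: %.8s  Number of Namespaces: %u\n",
           cdata->vid, cdata->mn, cdata->fr, cdata->nn);

    spdk_nvme_detach(ctrlr);
    return 0;
}

The formatted block later in this log ("NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1") is essentially the tool's dump of this same spdk_nvme_ctrlr_data structure together with controller register and log-page reads.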
00:29:24.644 [2024-10-28 05:05:14.978143] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:24.644 [2024-10-28 05:05:14.978194] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:24.644 [2024-10-28 05:05:14.978204] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:24.644 [2024-10-28 05:05:14.978219] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:24.644 [2024-10-28 05:05:14.978231] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:24.644 [2024-10-28 05:05:14.981909] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:24.644 [2024-10-28 05:05:14.981969] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1577df0 0 00:29:24.644 [2024-10-28 05:05:14.988658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:24.644 [2024-10-28 05:05:14.988679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:24.644 [2024-10-28 05:05:14.988687] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:24.644 [2024-10-28 05:05:14.988693] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:24.644 [2024-10-28 05:05:14.988724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.988737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.988743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1577df0) 00:29:24.644 [2024-10-28 05:05:14.988758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:24.644 [2024-10-28 05:05:14.988785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5a40, cid 0, qid 0 00:29:24.644 [2024-10-28 05:05:14.996669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.644 [2024-10-28 05:05:14.996688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.644 [2024-10-28 05:05:14.996695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.996702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5a40) on tqpair=0x1577df0 00:29:24.644 [2024-10-28 05:05:14.996720] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:24.644 [2024-10-28 05:05:14.996733] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:24.644 [2024-10-28 05:05:14.996743] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:24.644 [2024-10-28 05:05:14.996761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.996774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.996781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1577df0) 00:29:24.644 [2024-10-28 05:05:14.996793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.644 [2024-10-28 05:05:14.996817] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5a40, cid 0, qid 0 00:29:24.644 [2024-10-28 05:05:14.996970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.644 [2024-10-28 05:05:14.996988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.644 [2024-10-28 05:05:14.996997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.997004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5a40) on tqpair=0x1577df0 00:29:24.644 [2024-10-28 05:05:14.997017] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:24.644 [2024-10-28 05:05:14.997033] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:24.644 [2024-10-28 05:05:14.997049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.997057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.997063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1577df0) 00:29:24.644 [2024-10-28 05:05:14.997074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.644 [2024-10-28 05:05:14.997097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5a40, cid 0, qid 0 00:29:24.644 [2024-10-28 05:05:14.997219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.644 [2024-10-28 05:05:14.997235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.644 [2024-10-28 05:05:14.997242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.997248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5a40) on tqpair=0x1577df0 00:29:24.644 [2024-10-28 05:05:14.997257] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:24.644 [2024-10-28 05:05:14.997273] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:24.644 [2024-10-28 05:05:14.997291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.997300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.997306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1577df0) 00:29:24.644 [2024-10-28 05:05:14.997317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.644 [2024-10-28 05:05:14.997340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5a40, cid 0, qid 0 00:29:24.644 [2024-10-28 05:05:14.997452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.644 [2024-10-28 05:05:14.997469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.644 [2024-10-28 05:05:14.997477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.997483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5a40) on tqpair=0x1577df0 00:29:24.644 [2024-10-28 05:05:14.997492] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:24.644 [2024-10-28 05:05:14.997515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.997527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.997533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1577df0) 00:29:24.644 [2024-10-28 05:05:14.997544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.644 [2024-10-28 05:05:14.997567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5a40, cid 0, qid 0 00:29:24.644 [2024-10-28 05:05:14.997679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.644 [2024-10-28 05:05:14.997698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.644 [2024-10-28 05:05:14.997705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.997712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5a40) on tqpair=0x1577df0 00:29:24.644 [2024-10-28 05:05:14.997720] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:24.644 [2024-10-28 05:05:14.997729] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:24.644 [2024-10-28 05:05:14.997748] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:24.644 [2024-10-28 05:05:14.997861] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:24.644 [2024-10-28 05:05:14.997870] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:24.644 [2024-10-28 05:05:14.997882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.997891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.997912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1577df0) 00:29:24.644 [2024-10-28 05:05:14.997923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.644 [2024-10-28 05:05:14.997945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5a40, cid 0, qid 0 00:29:24.644 [2024-10-28 05:05:14.998108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.644 [2024-10-28 05:05:14.998124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.644 [2024-10-28 05:05:14.998131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.998138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5a40) on tqpair=0x1577df0 00:29:24.644 [2024-10-28 05:05:14.998146] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:24.644 [2024-10-28 05:05:14.998164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.644 [2024-10-28 
05:05:14.998175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.998182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1577df0) 00:29:24.644 [2024-10-28 05:05:14.998192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.644 [2024-10-28 05:05:14.998215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5a40, cid 0, qid 0 00:29:24.644 [2024-10-28 05:05:14.998318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.644 [2024-10-28 05:05:14.998335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.644 [2024-10-28 05:05:14.998345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.644 [2024-10-28 05:05:14.998352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5a40) on tqpair=0x1577df0 00:29:24.644 [2024-10-28 05:05:14.998359] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:24.644 [2024-10-28 05:05:14.998368] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:24.644 [2024-10-28 05:05:14.998382] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:24.644 [2024-10-28 05:05:14.998413] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:24.644 [2024-10-28 05:05:14.998428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.998436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1577df0) 00:29:24.645 [2024-10-28 05:05:14.998447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.645 [2024-10-28 05:05:14.998469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5a40, cid 0, qid 0 00:29:24.645 [2024-10-28 05:05:14.998661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.645 [2024-10-28 05:05:14.998680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.645 [2024-10-28 05:05:14.998696] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.998709] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1577df0): datao=0, datal=4096, cccid=0 00:29:24.645 [2024-10-28 05:05:14.998721] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e5a40) on tqpair(0x1577df0): expected_datao=0, payload_size=4096 00:29:24.645 [2024-10-28 05:05:14.998734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.998747] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.998755] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.998767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.645 [2024-10-28 05:05:14.998777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.645 [2024-10-28 05:05:14.998784] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.998790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5a40) on tqpair=0x1577df0 00:29:24.645 [2024-10-28 05:05:14.998801] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:24.645 [2024-10-28 05:05:14.998810] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:24.645 [2024-10-28 05:05:14.998817] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:24.645 [2024-10-28 05:05:14.998824] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:24.645 [2024-10-28 05:05:14.998831] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:24.645 [2024-10-28 05:05:14.998839] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:24.645 [2024-10-28 05:05:14.998855] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:24.645 [2024-10-28 05:05:14.998869] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.998877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.998884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1577df0) 00:29:24.645 [2024-10-28 05:05:14.998895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:24.645 [2024-10-28 05:05:14.998918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5a40, cid 0, qid 0 00:29:24.645 [2024-10-28 05:05:14.999057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.645 [2024-10-28 05:05:14.999075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.645 [2024-10-28 05:05:14.999084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5a40) on tqpair=0x1577df0 00:29:24.645 [2024-10-28 05:05:14.999106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1577df0) 00:29:24.645 [2024-10-28 05:05:14.999132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.645 [2024-10-28 05:05:14.999142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1577df0) 00:29:24.645 [2024-10-28 05:05:14.999164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.645 [2024-10-28 05:05:14.999177] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999184] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1577df0) 00:29:24.645 [2024-10-28 05:05:14.999199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.645 [2024-10-28 05:05:14.999209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1577df0) 00:29:24.645 [2024-10-28 05:05:14.999245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.645 [2024-10-28 05:05:14.999254] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:24.645 [2024-10-28 05:05:14.999270] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:24.645 [2024-10-28 05:05:14.999284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1577df0) 00:29:24.645 [2024-10-28 05:05:14.999301] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.645 [2024-10-28 05:05:14.999339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5a40, cid 0, qid 0 00:29:24.645 [2024-10-28 05:05:14.999350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5bc0, cid 1, qid 0 00:29:24.645 [2024-10-28 05:05:14.999358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5d40, cid 2, qid 0 00:29:24.645 [2024-10-28 05:05:14.999365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5ec0, cid 3, qid 0 00:29:24.645 [2024-10-28 05:05:14.999372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e6040, cid 4, qid 0 00:29:24.645 [2024-10-28 05:05:14.999579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.645 [2024-10-28 05:05:14.999596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.645 [2024-10-28 05:05:14.999605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e6040) on tqpair=0x1577df0 00:29:24.645 [2024-10-28 05:05:14.999625] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:24.645 [2024-10-28 05:05:14.999643] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:24.645 [2024-10-28 05:05:14.999662] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:24.645 [2024-10-28 05:05:14.999676] 
nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:24.645 [2024-10-28 05:05:14.999687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1577df0) 00:29:24.645 [2024-10-28 05:05:14.999712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:24.645 [2024-10-28 05:05:14.999734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e6040, cid 4, qid 0 00:29:24.645 [2024-10-28 05:05:14.999870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.645 [2024-10-28 05:05:14.999892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.645 [2024-10-28 05:05:14.999901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:14.999908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e6040) on tqpair=0x1577df0 00:29:24.645 [2024-10-28 05:05:14.999976] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:24.645 [2024-10-28 05:05:14.999998] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:24.645 [2024-10-28 05:05:15.000015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:15.000023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1577df0) 00:29:24.645 [2024-10-28 05:05:15.000034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.645 [2024-10-28 05:05:15.000071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e6040, cid 4, qid 0 00:29:24.645 [2024-10-28 05:05:15.000260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.645 [2024-10-28 05:05:15.000277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.645 [2024-10-28 05:05:15.000289] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:15.000299] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1577df0): datao=0, datal=4096, cccid=4 00:29:24.645 [2024-10-28 05:05:15.000311] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e6040) on tqpair(0x1577df0): expected_datao=0, payload_size=4096 00:29:24.645 [2024-10-28 05:05:15.000323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:15.000348] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:15.000358] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:15.000405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.645 [2024-10-28 05:05:15.000421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.645 [2024-10-28 05:05:15.000431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.645 [2024-10-28 
05:05:15.000438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e6040) on tqpair=0x1577df0 00:29:24.645 [2024-10-28 05:05:15.000452] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:24.645 [2024-10-28 05:05:15.000469] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:24.645 [2024-10-28 05:05:15.000490] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:24.645 [2024-10-28 05:05:15.000510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:15.000519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1577df0) 00:29:24.645 [2024-10-28 05:05:15.000530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.645 [2024-10-28 05:05:15.000553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e6040, cid 4, qid 0 00:29:24.645 [2024-10-28 05:05:15.004662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.645 [2024-10-28 05:05:15.004679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.645 [2024-10-28 05:05:15.004686] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.645 [2024-10-28 05:05:15.004692] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1577df0): datao=0, datal=4096, cccid=4 00:29:24.645 [2024-10-28 05:05:15.004700] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e6040) on tqpair(0x1577df0): expected_datao=0, payload_size=4096 00:29:24.646 [2024-10-28 05:05:15.004707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.004720] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.004728] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.004737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.646 [2024-10-28 05:05:15.004745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.646 [2024-10-28 05:05:15.004752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.004758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e6040) on tqpair=0x1577df0 00:29:24.646 [2024-10-28 05:05:15.004778] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:24.646 [2024-10-28 05:05:15.004799] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:24.646 [2024-10-28 05:05:15.004816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.004824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1577df0) 00:29:24.646 [2024-10-28 05:05:15.004834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.646 [2024-10-28 05:05:15.004858] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e6040, cid 4, qid 0 00:29:24.646 [2024-10-28 05:05:15.005047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.646 [2024-10-28 05:05:15.005064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.646 [2024-10-28 05:05:15.005071] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005080] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1577df0): datao=0, datal=4096, cccid=4 00:29:24.646 [2024-10-28 05:05:15.005092] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e6040) on tqpair(0x1577df0): expected_datao=0, payload_size=4096 00:29:24.646 [2024-10-28 05:05:15.005104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005120] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005132] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.646 [2024-10-28 05:05:15.005154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.646 [2024-10-28 05:05:15.005161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e6040) on tqpair=0x1577df0 00:29:24.646 [2024-10-28 05:05:15.005181] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:24.646 [2024-10-28 05:05:15.005197] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:24.646 [2024-10-28 05:05:15.005216] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:24.646 [2024-10-28 05:05:15.005227] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:24.646 [2024-10-28 05:05:15.005236] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:24.646 [2024-10-28 05:05:15.005245] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:24.646 [2024-10-28 05:05:15.005254] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:24.646 [2024-10-28 05:05:15.005262] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:24.646 [2024-10-28 05:05:15.005274] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:24.646 [2024-10-28 05:05:15.005294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1577df0) 00:29:24.646 [2024-10-28 05:05:15.005313] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:24.646 [2024-10-28 05:05:15.005343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1577df0) 00:29:24.646 [2024-10-28 05:05:15.005365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.646 [2024-10-28 05:05:15.005405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e6040, cid 4, qid 0 00:29:24.646 [2024-10-28 05:05:15.005417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e61c0, cid 5, qid 0 00:29:24.646 [2024-10-28 05:05:15.005610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.646 [2024-10-28 05:05:15.005628] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.646 [2024-10-28 05:05:15.005645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e6040) on tqpair=0x1577df0 00:29:24.646 [2024-10-28 05:05:15.005664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.646 [2024-10-28 05:05:15.005674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.646 [2024-10-28 05:05:15.005680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e61c0) on tqpair=0x1577df0 00:29:24.646 [2024-10-28 05:05:15.005705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1577df0) 00:29:24.646 [2024-10-28 05:05:15.005726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.646 [2024-10-28 05:05:15.005748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e61c0, cid 5, qid 0 00:29:24.646 [2024-10-28 05:05:15.005912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.646 [2024-10-28 05:05:15.005928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.646 [2024-10-28 05:05:15.005935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e61c0) on tqpair=0x1577df0 00:29:24.646 [2024-10-28 05:05:15.005960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.005971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1577df0) 00:29:24.646 [2024-10-28 05:05:15.005982] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.646 [2024-10-28 05:05:15.006003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e61c0, cid 5, qid 0 00:29:24.646 [2024-10-28 05:05:15.006117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.646 [2024-10-28 05:05:15.006135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.646 
[2024-10-28 05:05:15.006143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.006150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e61c0) on tqpair=0x1577df0 00:29:24.646 [2024-10-28 05:05:15.006167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.006181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1577df0) 00:29:24.646 [2024-10-28 05:05:15.006196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.646 [2024-10-28 05:05:15.006218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e61c0, cid 5, qid 0 00:29:24.646 [2024-10-28 05:05:15.006332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.646 [2024-10-28 05:05:15.006348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.646 [2024-10-28 05:05:15.006355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.006362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e61c0) on tqpair=0x1577df0 00:29:24.646 [2024-10-28 05:05:15.006388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.006399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1577df0) 00:29:24.646 [2024-10-28 05:05:15.006410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.646 [2024-10-28 05:05:15.006422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.006430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1577df0) 00:29:24.646 [2024-10-28 05:05:15.006440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.646 [2024-10-28 05:05:15.006451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.006459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1577df0) 00:29:24.646 [2024-10-28 05:05:15.006468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.646 [2024-10-28 05:05:15.006486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.006510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1577df0) 00:29:24.646 [2024-10-28 05:05:15.006520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.646 [2024-10-28 05:05:15.006543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e61c0, cid 5, qid 0 00:29:24.646 [2024-10-28 05:05:15.006554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e6040, cid 4, qid 0 00:29:24.646 [2024-10-28 05:05:15.006578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e6340, cid 6, qid 0 00:29:24.646 [2024-10-28 05:05:15.006585] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e64c0, cid 7, qid 0 00:29:24.646 [2024-10-28 05:05:15.006838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.646 [2024-10-28 05:05:15.006860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.646 [2024-10-28 05:05:15.006874] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.006884] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1577df0): datao=0, datal=8192, cccid=5 00:29:24.646 [2024-10-28 05:05:15.006896] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e61c0) on tqpair(0x1577df0): expected_datao=0, payload_size=8192 00:29:24.646 [2024-10-28 05:05:15.006910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.006934] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.006944] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.006959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.646 [2024-10-28 05:05:15.006977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.646 [2024-10-28 05:05:15.006988] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.646 [2024-10-28 05:05:15.007004] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1577df0): datao=0, datal=512, cccid=4 00:29:24.646 [2024-10-28 05:05:15.007014] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e6040) on tqpair(0x1577df0): expected_datao=0, payload_size=512 00:29:24.647 [2024-10-28 05:05:15.007021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007031] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007038] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.647 [2024-10-28 05:05:15.007055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.647 [2024-10-28 05:05:15.007062] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007068] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1577df0): datao=0, datal=512, cccid=6 00:29:24.647 [2024-10-28 05:05:15.007075] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15e6340) on tqpair(0x1577df0): expected_datao=0, payload_size=512 00:29:24.647 [2024-10-28 05:05:15.007082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007091] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007098] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.647 [2024-10-28 05:05:15.007114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.647 [2024-10-28 05:05:15.007121] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007127] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1577df0): datao=0, datal=4096, cccid=7 00:29:24.647 [2024-10-28 05:05:15.007134] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x15e64c0) on tqpair(0x1577df0): expected_datao=0, payload_size=4096 00:29:24.647 [2024-10-28 05:05:15.007141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007150] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007157] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.647 [2024-10-28 05:05:15.007178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.647 [2024-10-28 05:05:15.007185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e61c0) on tqpair=0x1577df0 00:29:24.647 [2024-10-28 05:05:15.007225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.647 [2024-10-28 05:05:15.007237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.647 [2024-10-28 05:05:15.007244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e6040) on tqpair=0x1577df0 00:29:24.647 [2024-10-28 05:05:15.007280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.647 [2024-10-28 05:05:15.007291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.647 [2024-10-28 05:05:15.007297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e6340) on tqpair=0x1577df0 00:29:24.647 [2024-10-28 05:05:15.007313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.647 [2024-10-28 05:05:15.007322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.647 [2024-10-28 05:05:15.007328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.647 [2024-10-28 05:05:15.007334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e64c0) on tqpair=0x1577df0 00:29:24.647 ===================================================== 00:29:24.647 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:24.647 ===================================================== 00:29:24.647 Controller Capabilities/Features 00:29:24.647 ================================ 00:29:24.647 Vendor ID: 8086 00:29:24.647 Subsystem Vendor ID: 8086 00:29:24.647 Serial Number: SPDK00000000000001 00:29:24.647 Model Number: SPDK bdev Controller 00:29:24.647 Firmware Version: 25.01 00:29:24.647 Recommended Arb Burst: 6 00:29:24.647 IEEE OUI Identifier: e4 d2 5c 00:29:24.647 Multi-path I/O 00:29:24.647 May have multiple subsystem ports: Yes 00:29:24.647 May have multiple controllers: Yes 00:29:24.647 Associated with SR-IOV VF: No 00:29:24.647 Max Data Transfer Size: 131072 00:29:24.647 Max Number of Namespaces: 32 00:29:24.647 Max Number of I/O Queues: 127 00:29:24.647 NVMe Specification Version (VS): 1.3 00:29:24.647 NVMe Specification Version (Identify): 1.3 00:29:24.647 Maximum Queue Entries: 128 00:29:24.647 Contiguous Queues Required: Yes 00:29:24.647 Arbitration Mechanisms Supported 00:29:24.647 Weighted Round Robin: Not Supported 00:29:24.647 Vendor Specific: Not Supported 00:29:24.647 Reset Timeout: 15000 ms 00:29:24.647 Doorbell Stride: 4 bytes 00:29:24.647 NVM 
Subsystem Reset: Not Supported 00:29:24.647 Command Sets Supported 00:29:24.647 NVM Command Set: Supported 00:29:24.647 Boot Partition: Not Supported 00:29:24.647 Memory Page Size Minimum: 4096 bytes 00:29:24.647 Memory Page Size Maximum: 4096 bytes 00:29:24.647 Persistent Memory Region: Not Supported 00:29:24.647 Optional Asynchronous Events Supported 00:29:24.647 Namespace Attribute Notices: Supported 00:29:24.647 Firmware Activation Notices: Not Supported 00:29:24.647 ANA Change Notices: Not Supported 00:29:24.647 PLE Aggregate Log Change Notices: Not Supported 00:29:24.647 LBA Status Info Alert Notices: Not Supported 00:29:24.647 EGE Aggregate Log Change Notices: Not Supported 00:29:24.647 Normal NVM Subsystem Shutdown event: Not Supported 00:29:24.647 Zone Descriptor Change Notices: Not Supported 00:29:24.647 Discovery Log Change Notices: Not Supported 00:29:24.647 Controller Attributes 00:29:24.647 128-bit Host Identifier: Supported 00:29:24.647 Non-Operational Permissive Mode: Not Supported 00:29:24.647 NVM Sets: Not Supported 00:29:24.647 Read Recovery Levels: Not Supported 00:29:24.647 Endurance Groups: Not Supported 00:29:24.647 Predictable Latency Mode: Not Supported 00:29:24.647 Traffic Based Keep ALive: Not Supported 00:29:24.647 Namespace Granularity: Not Supported 00:29:24.647 SQ Associations: Not Supported 00:29:24.647 UUID List: Not Supported 00:29:24.647 Multi-Domain Subsystem: Not Supported 00:29:24.647 Fixed Capacity Management: Not Supported 00:29:24.647 Variable Capacity Management: Not Supported 00:29:24.647 Delete Endurance Group: Not Supported 00:29:24.647 Delete NVM Set: Not Supported 00:29:24.647 Extended LBA Formats Supported: Not Supported 00:29:24.647 Flexible Data Placement Supported: Not Supported 00:29:24.647 00:29:24.647 Controller Memory Buffer Support 00:29:24.647 ================================ 00:29:24.647 Supported: No 00:29:24.647 00:29:24.647 Persistent Memory Region Support 00:29:24.647 ================================ 00:29:24.647 Supported: No 00:29:24.647 00:29:24.647 Admin Command Set Attributes 00:29:24.647 ============================ 00:29:24.647 Security Send/Receive: Not Supported 00:29:24.647 Format NVM: Not Supported 00:29:24.647 Firmware Activate/Download: Not Supported 00:29:24.647 Namespace Management: Not Supported 00:29:24.647 Device Self-Test: Not Supported 00:29:24.647 Directives: Not Supported 00:29:24.647 NVMe-MI: Not Supported 00:29:24.647 Virtualization Management: Not Supported 00:29:24.647 Doorbell Buffer Config: Not Supported 00:29:24.647 Get LBA Status Capability: Not Supported 00:29:24.647 Command & Feature Lockdown Capability: Not Supported 00:29:24.647 Abort Command Limit: 4 00:29:24.647 Async Event Request Limit: 4 00:29:24.647 Number of Firmware Slots: N/A 00:29:24.647 Firmware Slot 1 Read-Only: N/A 00:29:24.647 Firmware Activation Without Reset: N/A 00:29:24.647 Multiple Update Detection Support: N/A 00:29:24.647 Firmware Update Granularity: No Information Provided 00:29:24.647 Per-Namespace SMART Log: No 00:29:24.647 Asymmetric Namespace Access Log Page: Not Supported 00:29:24.647 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:24.647 Command Effects Log Page: Supported 00:29:24.647 Get Log Page Extended Data: Supported 00:29:24.647 Telemetry Log Pages: Not Supported 00:29:24.647 Persistent Event Log Pages: Not Supported 00:29:24.647 Supported Log Pages Log Page: May Support 00:29:24.647 Commands Supported & Effects Log Page: Not Supported 00:29:24.647 Feature Identifiers & Effects Log Page:May Support 00:29:24.647 
NVMe-MI Commands & Effects Log Page: May Support 00:29:24.647 Data Area 4 for Telemetry Log: Not Supported 00:29:24.647 Error Log Page Entries Supported: 128 00:29:24.647 Keep Alive: Supported 00:29:24.647 Keep Alive Granularity: 10000 ms 00:29:24.647 00:29:24.647 NVM Command Set Attributes 00:29:24.647 ========================== 00:29:24.647 Submission Queue Entry Size 00:29:24.647 Max: 64 00:29:24.647 Min: 64 00:29:24.647 Completion Queue Entry Size 00:29:24.647 Max: 16 00:29:24.647 Min: 16 00:29:24.647 Number of Namespaces: 32 00:29:24.647 Compare Command: Supported 00:29:24.647 Write Uncorrectable Command: Not Supported 00:29:24.647 Dataset Management Command: Supported 00:29:24.647 Write Zeroes Command: Supported 00:29:24.647 Set Features Save Field: Not Supported 00:29:24.647 Reservations: Supported 00:29:24.647 Timestamp: Not Supported 00:29:24.647 Copy: Supported 00:29:24.647 Volatile Write Cache: Present 00:29:24.647 Atomic Write Unit (Normal): 1 00:29:24.647 Atomic Write Unit (PFail): 1 00:29:24.647 Atomic Compare & Write Unit: 1 00:29:24.647 Fused Compare & Write: Supported 00:29:24.647 Scatter-Gather List 00:29:24.647 SGL Command Set: Supported 00:29:24.647 SGL Keyed: Supported 00:29:24.647 SGL Bit Bucket Descriptor: Not Supported 00:29:24.647 SGL Metadata Pointer: Not Supported 00:29:24.647 Oversized SGL: Not Supported 00:29:24.647 SGL Metadata Address: Not Supported 00:29:24.647 SGL Offset: Supported 00:29:24.647 Transport SGL Data Block: Not Supported 00:29:24.647 Replay Protected Memory Block: Not Supported 00:29:24.647 00:29:24.647 Firmware Slot Information 00:29:24.647 ========================= 00:29:24.647 Active slot: 1 00:29:24.647 Slot 1 Firmware Revision: 25.01 00:29:24.647 00:29:24.647 00:29:24.648 Commands Supported and Effects 00:29:24.648 ============================== 00:29:24.648 Admin Commands 00:29:24.648 -------------- 00:29:24.648 Get Log Page (02h): Supported 00:29:24.648 Identify (06h): Supported 00:29:24.648 Abort (08h): Supported 00:29:24.648 Set Features (09h): Supported 00:29:24.648 Get Features (0Ah): Supported 00:29:24.648 Asynchronous Event Request (0Ch): Supported 00:29:24.648 Keep Alive (18h): Supported 00:29:24.648 I/O Commands 00:29:24.648 ------------ 00:29:24.648 Flush (00h): Supported LBA-Change 00:29:24.648 Write (01h): Supported LBA-Change 00:29:24.648 Read (02h): Supported 00:29:24.648 Compare (05h): Supported 00:29:24.648 Write Zeroes (08h): Supported LBA-Change 00:29:24.648 Dataset Management (09h): Supported LBA-Change 00:29:24.648 Copy (19h): Supported LBA-Change 00:29:24.648 00:29:24.648 Error Log 00:29:24.648 ========= 00:29:24.648 00:29:24.648 Arbitration 00:29:24.648 =========== 00:29:24.648 Arbitration Burst: 1 00:29:24.648 00:29:24.648 Power Management 00:29:24.648 ================ 00:29:24.648 Number of Power States: 1 00:29:24.648 Current Power State: Power State #0 00:29:24.648 Power State #0: 00:29:24.648 Max Power: 0.00 W 00:29:24.648 Non-Operational State: Operational 00:29:24.648 Entry Latency: Not Reported 00:29:24.648 Exit Latency: Not Reported 00:29:24.648 Relative Read Throughput: 0 00:29:24.648 Relative Read Latency: 0 00:29:24.648 Relative Write Throughput: 0 00:29:24.648 Relative Write Latency: 0 00:29:24.648 Idle Power: Not Reported 00:29:24.648 Active Power: Not Reported 00:29:24.648 Non-Operational Permissive Mode: Not Supported 00:29:24.648 00:29:24.648 Health Information 00:29:24.648 ================== 00:29:24.648 Critical Warnings: 00:29:24.648 Available Spare Space: OK 00:29:24.648 Temperature: OK 
00:29:24.648 Device Reliability: OK 00:29:24.648 Read Only: No 00:29:24.648 Volatile Memory Backup: OK 00:29:24.648 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:24.648 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:24.648 Available Spare: 0% 00:29:24.648 Available Spare Threshold: 0% 00:29:24.648 Life Percentage Used:[2024-10-28 05:05:15.007443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.007457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1577df0) 00:29:24.648 [2024-10-28 05:05:15.007468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.648 [2024-10-28 05:05:15.007490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e64c0, cid 7, qid 0 00:29:24.648 [2024-10-28 05:05:15.007732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.648 [2024-10-28 05:05:15.007749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.648 [2024-10-28 05:05:15.007756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.007763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e64c0) on tqpair=0x1577df0 00:29:24.648 [2024-10-28 05:05:15.007815] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:24.648 [2024-10-28 05:05:15.007837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5a40) on tqpair=0x1577df0 00:29:24.648 [2024-10-28 05:05:15.007851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.648 [2024-10-28 05:05:15.007860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5bc0) on tqpair=0x1577df0 00:29:24.648 [2024-10-28 05:05:15.007868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.648 [2024-10-28 05:05:15.007876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5d40) on tqpair=0x1577df0 00:29:24.648 [2024-10-28 05:05:15.007883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.648 [2024-10-28 05:05:15.007891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5ec0) on tqpair=0x1577df0 00:29:24.648 [2024-10-28 05:05:15.007898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.648 [2024-10-28 05:05:15.007911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.007919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.007925] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1577df0) 00:29:24.648 [2024-10-28 05:05:15.007950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.648 [2024-10-28 05:05:15.007973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5ec0, cid 3, qid 0 00:29:24.648 [2024-10-28 05:05:15.008136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.648 [2024-10-28 05:05:15.008154] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.648 [2024-10-28 05:05:15.008162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.008169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5ec0) on tqpair=0x1577df0 00:29:24.648 [2024-10-28 05:05:15.008181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.008188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.008194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1577df0) 00:29:24.648 [2024-10-28 05:05:15.008205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.648 [2024-10-28 05:05:15.008234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5ec0, cid 3, qid 0 00:29:24.648 [2024-10-28 05:05:15.008355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.648 [2024-10-28 05:05:15.008373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.648 [2024-10-28 05:05:15.008381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.008388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5ec0) on tqpair=0x1577df0 00:29:24.648 [2024-10-28 05:05:15.008400] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:24.648 [2024-10-28 05:05:15.008409] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:24.648 [2024-10-28 05:05:15.008427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.008437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.008444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1577df0) 00:29:24.648 [2024-10-28 05:05:15.008454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.648 [2024-10-28 05:05:15.008476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5ec0, cid 3, qid 0 00:29:24.648 [2024-10-28 05:05:15.008591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.648 [2024-10-28 05:05:15.008607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.648 [2024-10-28 05:05:15.008614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.008620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5ec0) on tqpair=0x1577df0 00:29:24.648 [2024-10-28 05:05:15.012659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.012675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.012682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1577df0) 00:29:24.648 [2024-10-28 05:05:15.012693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.648 [2024-10-28 05:05:15.012716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15e5ec0, cid 3, qid 0 00:29:24.648 [2024-10-28 05:05:15.012869] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.648 [2024-10-28 05:05:15.012885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.648 [2024-10-28 05:05:15.012892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.648 [2024-10-28 05:05:15.012899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15e5ec0) on tqpair=0x1577df0 00:29:24.648 [2024-10-28 05:05:15.012913] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:29:24.648 0% 00:29:24.648 Data Units Read: 0 00:29:24.648 Data Units Written: 0 00:29:24.648 Host Read Commands: 0 00:29:24.648 Host Write Commands: 0 00:29:24.648 Controller Busy Time: 0 minutes 00:29:24.648 Power Cycles: 0 00:29:24.648 Power On Hours: 0 hours 00:29:24.648 Unsafe Shutdowns: 0 00:29:24.648 Unrecoverable Media Errors: 0 00:29:24.648 Lifetime Error Log Entries: 0 00:29:24.648 Warning Temperature Time: 0 minutes 00:29:24.648 Critical Temperature Time: 0 minutes 00:29:24.648 00:29:24.648 Number of Queues 00:29:24.648 ================ 00:29:24.648 Number of I/O Submission Queues: 127 00:29:24.648 Number of I/O Completion Queues: 127 00:29:24.648 00:29:24.648 Active Namespaces 00:29:24.648 ================= 00:29:24.648 Namespace ID:1 00:29:24.648 Error Recovery Timeout: Unlimited 00:29:24.649 Command Set Identifier: NVM (00h) 00:29:24.649 Deallocate: Supported 00:29:24.649 Deallocated/Unwritten Error: Not Supported 00:29:24.649 Deallocated Read Value: Unknown 00:29:24.649 Deallocate in Write Zeroes: Not Supported 00:29:24.649 Deallocated Guard Field: 0xFFFF 00:29:24.649 Flush: Supported 00:29:24.649 Reservation: Supported 00:29:24.649 Namespace Sharing Capabilities: Multiple Controllers 00:29:24.649 Size (in LBAs): 131072 (0GiB) 00:29:24.649 Capacity (in LBAs): 131072 (0GiB) 00:29:24.649 Utilization (in LBAs): 131072 (0GiB) 00:29:24.649 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:24.649 EUI64: ABCDEF0123456789 00:29:24.649 UUID: d6fb1c29-558b-45f4-b6a5-220fcc4f982d 00:29:24.649 Thin Provisioning: Not Supported 00:29:24.649 Per-NS Atomic Units: Yes 00:29:24.649 Atomic Boundary Size (Normal): 0 00:29:24.649 Atomic Boundary Size (PFail): 0 00:29:24.649 Atomic Boundary Offset: 0 00:29:24.649 Maximum Single Source Range Length: 65535 00:29:24.649 Maximum Copy Length: 65535 00:29:24.649 Maximum Source Range Count: 1 00:29:24.649 NGUID/EUI64 Never Reused: No 00:29:24.649 Namespace Write Protected: No 00:29:24.649 Number of LBA Formats: 1 00:29:24.649 Current LBA Format: LBA Format #00 00:29:24.649 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:24.649 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:24.649 05:05:15 
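The identify dump above is produced by the SPDK host test attaching to the TCP target at 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1) before the teardown traced here. As an illustrative aside, not part of the logged run, a comparable inspection with standard nvme-cli tooling could look like the sketch below; the /dev/nvme1 device name is an assumption and depends on what the initiator already has attached.

  # discover the subsystems exported by the target seen in the log above (illustrative)
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # connect to the subsystem reported above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # dump controller identify data (device name /dev/nvme1 is assumed, check `nvme list`)
  nvme id-ctrl /dev/nvme1
  # detach again
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
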
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:24.649 rmmod nvme_tcp 00:29:24.649 rmmod nvme_fabrics 00:29:24.649 rmmod nvme_keyring 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 2415848 ']' 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 2415848 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2415848 ']' 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2415848 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2415848 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2415848' 00:29:24.649 killing process with pid 2415848 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2415848 00:29:24.649 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2415848 00:29:24.908 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:24.908 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:24.908 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:24.908 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:24.908 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:29:24.908 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:24.908 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:29:24.908 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:24.908 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:24.908 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.908 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.908 05:05:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.812 05:05:17 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.813 00:29:26.813 real 0m6.370s 00:29:26.813 user 0m8.166s 00:29:26.813 sys 0m1.917s 00:29:26.813 05:05:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:26.813 05:05:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:26.813 ************************************ 00:29:26.813 END TEST nvmf_identify 00:29:26.813 ************************************ 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.072 ************************************ 00:29:27.072 START TEST nvmf_perf 00:29:27.072 ************************************ 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:27.072 * Looking for test storage... 00:29:27.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1689 -- # lcov --version 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:29:27.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.072 --rc genhtml_branch_coverage=1 00:29:27.072 --rc genhtml_function_coverage=1 00:29:27.072 --rc genhtml_legend=1 00:29:27.072 --rc geninfo_all_blocks=1 00:29:27.072 --rc geninfo_unexecuted_blocks=1 00:29:27.072 00:29:27.072 ' 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:29:27.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.072 --rc genhtml_branch_coverage=1 00:29:27.072 --rc genhtml_function_coverage=1 00:29:27.072 --rc genhtml_legend=1 00:29:27.072 --rc geninfo_all_blocks=1 00:29:27.072 --rc geninfo_unexecuted_blocks=1 00:29:27.072 00:29:27.072 ' 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:29:27.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.072 --rc genhtml_branch_coverage=1 00:29:27.072 --rc genhtml_function_coverage=1 00:29:27.072 --rc genhtml_legend=1 00:29:27.072 --rc geninfo_all_blocks=1 00:29:27.072 --rc geninfo_unexecuted_blocks=1 00:29:27.072 00:29:27.072 ' 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:29:27.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.072 --rc genhtml_branch_coverage=1 00:29:27.072 --rc genhtml_function_coverage=1 00:29:27.072 --rc genhtml_legend=1 00:29:27.072 --rc geninfo_all_blocks=1 00:29:27.072 --rc geninfo_unexecuted_blocks=1 00:29:27.072 00:29:27.072 ' 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.072 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:27.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.073 05:05:17 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:27.073 05:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:28.972 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.972 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:28.973 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:28.973 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:28.973 05:05:19 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:28.973 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:28.973 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.235 05:05:19 
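The xtrace output around this point sets up the test network for the perf run: the two ice ports are found as cvl_0_0 and cvl_0_1, the target-side port is moved into its own network namespace, and 10.0.0.1/10.0.0.2 are assigned to the initiator and target sides. A condensed sketch of that sequence, using the same commands visible in the trace above and continuing just below with the shell tracing stripped, is:

  # condensed sketch of nvmf_tcp_init as traced in this log (not new configuration)
  ip netns add cvl_0_0_ns_spdk                                        # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside ns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up                     # traced just below
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
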
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:29.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:29:29.235 00:29:29.235 --- 10.0.0.2 ping statistics --- 00:29:29.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.235 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:29.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:29:29.235 00:29:29.235 --- 10.0.0.1 ping statistics --- 00:29:29.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.235 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=2418041 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 2418041 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2418041 ']' 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:29.235 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:29.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.236 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:29.236 05:05:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:29.236 [2024-10-28 05:05:19.672117] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:29:29.236 [2024-10-28 05:05:19.672220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.236 [2024-10-28 05:05:19.809759] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:29.494 [2024-10-28 05:05:19.845339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:29.494 [2024-10-28 05:05:19.893094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.494 [2024-10-28 05:05:19.893141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.494 [2024-10-28 05:05:19.893169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.494 [2024-10-28 05:05:19.893180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.494 [2024-10-28 05:05:19.893189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.494 [2024-10-28 05:05:19.894742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.494 [2024-10-28 05:05:19.895354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:29.494 [2024-10-28 05:05:19.895377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:29.494 [2024-10-28 05:05:19.895380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.426 05:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:30.426 05:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:29:30.426 05:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:30.426 05:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:30.426 05:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:30.426 05:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.426 05:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:30.426 05:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:33.706 05:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:33.706 05:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:33.706 05:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:33.706 05:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:33.964 05:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:33.964 05:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:33.964 05:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:33.964 05:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:33.964 05:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:34.530 [2024-10-28 05:05:24.828175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.530 05:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.789 05:05:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:34.789 05:05:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:35.047 05:05:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:35.047 05:05:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:35.305 05:05:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.563 [2024-10-28 05:05:25.929501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.563 05:05:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:35.820 05:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:35.820 05:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:35.820 05:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:35.820 05:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:37.192 Initializing NVMe Controllers 00:29:37.192 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:37.192 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:37.193 Initialization complete. Launching workers. 
00:29:37.193 ======================================================== 00:29:37.193 Latency(us) 00:29:37.193 Device Information : IOPS MiB/s Average min max 00:29:37.193 PCIE (0000:88:00.0) NSID 1 from core 0: 85204.78 332.83 374.84 42.91 4319.68 00:29:37.193 ======================================================== 00:29:37.193 Total : 85204.78 332.83 374.84 42.91 4319.68 00:29:37.193 00:29:37.193 05:05:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.565 Initializing NVMe Controllers 00:29:38.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:38.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:38.565 Initialization complete. Launching workers. 00:29:38.565 ======================================================== 00:29:38.565 Latency(us) 00:29:38.565 Device Information : IOPS MiB/s Average min max 00:29:38.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 62.88 0.25 16318.91 171.54 45832.98 00:29:38.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.89 0.22 18033.93 7012.10 48023.52 00:29:38.565 ======================================================== 00:29:38.565 Total : 118.76 0.46 17125.98 171.54 48023.52 00:29:38.565 00:29:38.565 05:05:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:40.465 Initializing NVMe Controllers 00:29:40.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:40.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:40.465 Initialization complete. Launching workers. 00:29:40.465 ======================================================== 00:29:40.465 Latency(us) 00:29:40.465 Device Information : IOPS MiB/s Average min max 00:29:40.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8145.07 31.82 3929.72 553.25 10614.17 00:29:40.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3856.67 15.07 8298.96 5323.06 15693.64 00:29:40.466 ======================================================== 00:29:40.466 Total : 12001.74 46.88 5333.74 553.25 15693.64 00:29:40.466 00:29:40.466 05:05:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:40.466 05:05:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:40.466 05:05:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:42.997 Initializing NVMe Controllers 00:29:42.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:42.997 Controller IO queue size 128, less than required. 00:29:42.997 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:42.997 Controller IO queue size 128, less than required. 00:29:42.997 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:42.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:42.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:42.997 Initialization complete. Launching workers. 00:29:42.997 ======================================================== 00:29:42.997 Latency(us) 00:29:42.997 Device Information : IOPS MiB/s Average min max 00:29:42.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1399.60 349.90 92781.23 44226.64 136795.52 00:29:42.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.83 147.46 225678.17 111226.53 351807.59 00:29:42.997 ======================================================== 00:29:42.997 Total : 1989.44 497.36 132182.84 44226.64 351807.59 00:29:42.997 00:29:42.997 05:05:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:43.562 No valid NVMe controllers or AIO or URING devices found 00:29:43.562 Initializing NVMe Controllers 00:29:43.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.562 Controller IO queue size 128, less than required. 00:29:43.562 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.562 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:43.563 Controller IO queue size 128, less than required. 00:29:43.563 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.563 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:43.563 WARNING: Some requested NVMe devices were skipped 00:29:43.563 05:05:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:46.151 Initializing NVMe Controllers 00:29:46.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:46.151 Controller IO queue size 128, less than required. 00:29:46.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.151 Controller IO queue size 128, less than required. 00:29:46.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:46.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:46.151 Initialization complete. Launching workers. 
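# [sketch] Pattern of the runs above and below: the same spdk_nvme_perf binary is pointed either at the
# local PCIe SSD or at the NVMe-oF/TCP target via -r, while queue depth (-q) and IO size in bytes (-o)
# are swept; -w randrw -M 50 is a 50/50 random read/write mix, -t is the run time in seconds, and
# --transport-stat (the run whose output follows) additionally dumps per-qpair poll and completion counters:
#   spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat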
00:29:46.151 00:29:46.151 ==================== 00:29:46.151 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:46.151 TCP transport: 00:29:46.151 polls: 14712 00:29:46.151 idle_polls: 9329 00:29:46.151 sock_completions: 5383 00:29:46.151 nvme_completions: 5623 00:29:46.151 submitted_requests: 8528 00:29:46.151 queued_requests: 1 00:29:46.151 00:29:46.151 ==================== 00:29:46.151 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:46.151 TCP transport: 00:29:46.151 polls: 14769 00:29:46.151 idle_polls: 8870 00:29:46.151 sock_completions: 5899 00:29:46.151 nvme_completions: 5857 00:29:46.151 submitted_requests: 8716 00:29:46.151 queued_requests: 1 00:29:46.151 ======================================================== 00:29:46.151 Latency(us) 00:29:46.151 Device Information : IOPS MiB/s Average min max 00:29:46.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1405.50 351.37 93940.02 61177.79 144816.36 00:29:46.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1464.00 366.00 88077.73 41973.36 134021.34 00:29:46.151 ======================================================== 00:29:46.151 Total : 2869.50 717.37 90949.12 41973.36 144816.36 00:29:46.151 00:29:46.409 05:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:46.409 05:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.667 05:05:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:46.667 05:05:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:46.667 05:05:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:49.949 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=469ef71d-1bee-4e2a-9650-3970beedea78 00:29:49.949 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 469ef71d-1bee-4e2a-9650-3970beedea78 00:29:49.949 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=469ef71d-1bee-4e2a-9650-3970beedea78 00:29:49.949 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:49.949 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:49.949 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:49.949 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:50.207 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:50.207 { 00:29:50.207 "uuid": "469ef71d-1bee-4e2a-9650-3970beedea78", 00:29:50.207 "name": "lvs_0", 00:29:50.207 "base_bdev": "Nvme0n1", 00:29:50.207 "total_data_clusters": 238234, 00:29:50.207 "free_clusters": 238234, 00:29:50.207 "block_size": 512, 00:29:50.207 "cluster_size": 4194304 00:29:50.207 } 00:29:50.207 ]' 00:29:50.207 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="469ef71d-1bee-4e2a-9650-3970beedea78") .free_clusters' 00:29:50.207 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:29:50.207 05:05:40 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="469ef71d-1bee-4e2a-9650-3970beedea78") .cluster_size' 00:29:50.207 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:50.207 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:29:50.207 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:29:50.207 952936 00:29:50.207 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:50.207 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:50.207 05:05:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 469ef71d-1bee-4e2a-9650-3970beedea78 lbd_0 20480 00:29:50.772 05:05:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=ad836bc3-e5ce-432f-a2cc-d74019edf1f1 00:29:51.030 05:05:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore ad836bc3-e5ce-432f-a2cc-d74019edf1f1 lvs_n_0 00:29:51.596 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=aa631439-71c0-49c0-ab07-f4a13e0235c4 00:29:51.596 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb aa631439-71c0-49c0-ab07-f4a13e0235c4 00:29:51.596 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=aa631439-71c0-49c0-ab07-f4a13e0235c4 00:29:51.596 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:51.596 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:51.596 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:51.596 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:51.854 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:51.854 { 00:29:51.854 "uuid": "469ef71d-1bee-4e2a-9650-3970beedea78", 00:29:51.854 "name": "lvs_0", 00:29:51.854 "base_bdev": "Nvme0n1", 00:29:51.854 "total_data_clusters": 238234, 00:29:51.854 "free_clusters": 233114, 00:29:51.854 "block_size": 512, 00:29:51.854 "cluster_size": 4194304 00:29:51.854 }, 00:29:51.854 { 00:29:51.854 "uuid": "aa631439-71c0-49c0-ab07-f4a13e0235c4", 00:29:51.854 "name": "lvs_n_0", 00:29:51.854 "base_bdev": "ad836bc3-e5ce-432f-a2cc-d74019edf1f1", 00:29:51.854 "total_data_clusters": 5114, 00:29:51.854 "free_clusters": 5114, 00:29:51.854 "block_size": 512, 00:29:51.854 "cluster_size": 4194304 00:29:51.854 } 00:29:51.854 ]' 00:29:51.854 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="aa631439-71c0-49c0-ab07-f4a13e0235c4") .free_clusters' 00:29:51.854 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:29:51.854 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="aa631439-71c0-49c0-ab07-f4a13e0235c4") .cluster_size' 00:29:52.111 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:52.111 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:29:52.111 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:29:52.111 20456 00:29:52.111 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:52.111 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa631439-71c0-49c0-ab07-f4a13e0235c4 lbd_nest_0 20456 00:29:52.369 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=bf66c7e0-ea2b-48e6-b7d3-36d507b528b0 00:29:52.369 05:05:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:52.627 05:05:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:52.627 05:05:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bf66c7e0-ea2b-48e6-b7d3-36d507b528b0 00:29:52.885 05:05:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.142 05:05:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:53.142 05:05:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:53.142 05:05:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:53.142 05:05:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:53.142 05:05:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.336 Initializing NVMe Controllers 00:30:05.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:05.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:05.336 Initialization complete. Launching workers. 00:30:05.336 ======================================================== 00:30:05.336 Latency(us) 00:30:05.336 Device Information : IOPS MiB/s Average min max 00:30:05.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.89 0.02 22353.18 197.64 48005.09 00:30:05.336 ======================================================== 00:30:05.336 Total : 44.89 0.02 22353.18 197.64 48005.09 00:30:05.336 00:30:05.336 05:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:05.336 05:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.296 Initializing NVMe Controllers 00:30:15.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:15.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:15.296 Initialization complete. Launching workers. 
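# [sketch] The lvol sizing above follows get_lvs_free_mb: free MiB = free_clusters * (cluster_size / 1 MiB),
# i.e. 238234 * 4 = 952936 MiB for lvs_0 (then capped to 20480 MiB for lbd_0) and 5114 * 4 = 20456 MiB for
# the nested lvs_n_0; the <...> placeholders below stand for the UUIDs printed in the log:
#   rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
#   rpc.py bdev_lvol_create -u <lvs_0 uuid> lbd_0 20480
#   rpc.py bdev_lvol_create_lvstore <lbd_0 bdev> lvs_n_0
#   rpc.py bdev_lvol_create -u <lvs_n_0 uuid> lbd_nest_0 20456
# The sweep that follows reruns spdk_nvme_perf against the nested lvol namespace for qd 1/32/128 with 512 B and 128 KiB IOs.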
00:30:15.296 ======================================================== 00:30:15.296 Latency(us) 00:30:15.296 Device Information : IOPS MiB/s Average min max 00:30:15.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.00 10.50 11910.91 4995.17 50997.95 00:30:15.296 ======================================================== 00:30:15.296 Total : 84.00 10.50 11910.91 4995.17 50997.95 00:30:15.296 00:30:15.296 05:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:15.296 05:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:15.296 05:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:25.260 Initializing NVMe Controllers 00:30:25.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:25.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:25.260 Initialization complete. Launching workers. 00:30:25.260 ======================================================== 00:30:25.260 Latency(us) 00:30:25.260 Device Information : IOPS MiB/s Average min max 00:30:25.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7251.93 3.54 4420.51 477.81 47944.78 00:30:25.260 ======================================================== 00:30:25.260 Total : 7251.93 3.54 4420.51 477.81 47944.78 00:30:25.260 00:30:25.260 05:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:25.260 05:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.229 Initializing NVMe Controllers 00:30:35.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:35.229 Initialization complete. Launching workers. 00:30:35.229 ======================================================== 00:30:35.229 Latency(us) 00:30:35.229 Device Information : IOPS MiB/s Average min max 00:30:35.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2925.74 365.72 10936.35 1138.10 24072.56 00:30:35.229 ======================================================== 00:30:35.229 Total : 2925.74 365.72 10936.35 1138.10 24072.56 00:30:35.229 00:30:35.229 05:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:35.229 05:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:35.229 05:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:47.422 Initializing NVMe Controllers 00:30:47.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:47.422 Controller IO queue size 128, less than required. 00:30:47.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:47.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:47.422 Initialization complete. Launching workers. 00:30:47.422 ======================================================== 00:30:47.422 Latency(us) 00:30:47.422 Device Information : IOPS MiB/s Average min max 00:30:47.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11688.37 5.71 10953.94 1767.77 25493.94 00:30:47.422 ======================================================== 00:30:47.422 Total : 11688.37 5.71 10953.94 1767.77 25493.94 00:30:47.422 00:30:47.422 05:06:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:47.422 05:06:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:57.466 Initializing NVMe Controllers 00:30:57.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:57.466 Controller IO queue size 128, less than required. 00:30:57.466 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:57.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:57.466 Initialization complete. Launching workers. 00:30:57.466 ======================================================== 00:30:57.466 Latency(us) 00:30:57.466 Device Information : IOPS MiB/s Average min max 00:30:57.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1160.89 145.11 110937.30 9706.84 247037.11 00:30:57.466 ======================================================== 00:30:57.466 Total : 1160.89 145.11 110937.30 9706.84 247037.11 00:30:57.466 00:30:57.466 05:06:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:57.466 05:06:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bf66c7e0-ea2b-48e6-b7d3-36d507b528b0 00:30:57.466 05:06:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:57.466 05:06:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ad836bc3-e5ce-432f-a2cc-d74019edf1f1 00:30:57.725 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:57.983 rmmod nvme_tcp 
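# [sketch] Cleanup above runs innermost-first before nvmftestfini stops the target and unloads the kernel
# modules (pid and UUIDs copied from the log):
#   rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
#   rpc.py bdev_lvol_delete bf66c7e0-ea2b-48e6-b7d3-36d507b528b0      # lbd_nest_0
#   rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
#   rpc.py bdev_lvol_delete ad836bc3-e5ce-432f-a2cc-d74019edf1f1      # lbd_0
#   rpc.py bdev_lvol_delete_lvstore -l lvs_0
#   kill 2418041; modprobe -v -r nvme-tcp; modprobe -v -r nvme-fabrics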
00:30:57.983 rmmod nvme_fabrics 00:30:57.983 rmmod nvme_keyring 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 2418041 ']' 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 2418041 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2418041 ']' 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2418041 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2418041 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2418041' 00:30:57.983 killing process with pid 2418041 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2418041 00:30:57.983 05:06:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2418041 00:30:59.882 05:06:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:59.882 05:06:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:59.882 05:06:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:59.882 05:06:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:30:59.882 05:06:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:30:59.882 05:06:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:59.882 05:06:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:30:59.882 05:06:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:59.882 05:06:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:59.882 05:06:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.882 05:06:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.882 05:06:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.783 05:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:01.783 00:31:01.783 real 1m34.747s 00:31:01.784 user 5m47.023s 00:31:01.784 sys 0m17.706s 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:01.784 ************************************ 00:31:01.784 END TEST nvmf_perf 00:31:01.784 ************************************ 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.784 ************************************ 00:31:01.784 START TEST nvmf_fio_host 00:31:01.784 ************************************ 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:01.784 * Looking for test storage... 00:31:01.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1689 -- # lcov --version 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.784 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:31:02.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.043 --rc genhtml_branch_coverage=1 00:31:02.043 --rc genhtml_function_coverage=1 00:31:02.043 --rc genhtml_legend=1 00:31:02.043 --rc geninfo_all_blocks=1 00:31:02.043 --rc geninfo_unexecuted_blocks=1 00:31:02.043 00:31:02.043 ' 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:31:02.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.043 --rc genhtml_branch_coverage=1 00:31:02.043 --rc genhtml_function_coverage=1 00:31:02.043 --rc genhtml_legend=1 00:31:02.043 --rc geninfo_all_blocks=1 00:31:02.043 --rc geninfo_unexecuted_blocks=1 00:31:02.043 00:31:02.043 ' 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:31:02.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.043 --rc genhtml_branch_coverage=1 00:31:02.043 --rc genhtml_function_coverage=1 00:31:02.043 --rc genhtml_legend=1 00:31:02.043 --rc geninfo_all_blocks=1 00:31:02.043 --rc geninfo_unexecuted_blocks=1 00:31:02.043 00:31:02.043 ' 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:31:02.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.043 --rc genhtml_branch_coverage=1 00:31:02.043 --rc genhtml_function_coverage=1 00:31:02.043 --rc genhtml_legend=1 00:31:02.043 --rc geninfo_all_blocks=1 00:31:02.043 --rc geninfo_unexecuted_blocks=1 00:31:02.043 00:31:02.043 ' 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.043 05:06:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.043 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:02.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:02.044 
05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:02.044 05:06:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:03.947 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:03.947 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:03.947 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:03.948 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:03.948 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:03.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:03.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:31:03.948 00:31:03.948 --- 10.0.0.2 ping statistics --- 00:31:03.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:03.948 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:03.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:03.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:31:03.948 00:31:03.948 --- 10.0.0.1 ping statistics --- 00:31:03.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:03.948 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2430765 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2430765 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2430765 ']' 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:03.948 05:06:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.207 [2024-10-28 05:06:54.578689] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:31:04.207 [2024-10-28 05:06:54.578785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.207 [2024-10-28 05:06:54.718463] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:31:04.207 [2024-10-28 05:06:54.761158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:04.465 [2024-10-28 05:06:54.812896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.465 [2024-10-28 05:06:54.812959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.465 [2024-10-28 05:06:54.812976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.465 [2024-10-28 05:06:54.812989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.465 [2024-10-28 05:06:54.813001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.465 [2024-10-28 05:06:54.814734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.465 [2024-10-28 05:06:54.814818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:04.465 [2024-10-28 05:06:54.814911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:04.465 [2024-10-28 05:06:54.814914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.030 05:06:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:05.030 05:06:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:31:05.030 05:06:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:05.288 [2024-10-28 05:06:55.828454] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.288 05:06:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:05.288 05:06:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:05.288 05:06:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.288 05:06:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:05.854 Malloc1 00:31:05.854 05:06:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:05.854 05:06:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:06.419 05:06:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.419 [2024-10-28 05:06:56.971770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.419 05:06:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:06.677 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:06.934 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:06.934 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:06.934 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.934 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:06.934 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:06.934 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:06.934 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:06.934 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:06.934 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:06.934 05:06:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:06.934 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:06.934 fio-3.35 00:31:06.934 Starting 1 thread 00:31:10.214 00:31:10.214 test: (groupid=0, jobs=1): err= 0: pid=2431258: Mon Oct 28 05:07:00 2024 00:31:10.214 read: IOPS=8637, BW=33.7MiB/s (35.4MB/s)(67.7MiB/2007msec) 00:31:10.214 slat (nsec): min=1930, 
max=119156, avg=2508.84, stdev=1461.74 00:31:10.214 clat (usec): min=2303, max=14595, avg=8145.62, stdev=652.21 00:31:10.214 lat (usec): min=2331, max=14598, avg=8148.13, stdev=652.12 00:31:10.214 clat percentiles (usec): 00:31:10.214 | 1.00th=[ 6652], 5.00th=[ 7177], 10.00th=[ 7373], 20.00th=[ 7635], 00:31:10.214 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8291], 00:31:10.214 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 9110], 00:31:10.214 | 99.00th=[ 9503], 99.50th=[ 9765], 99.90th=[13173], 99.95th=[13566], 00:31:10.214 | 99.99th=[14091] 00:31:10.214 bw ( KiB/s): min=33112, max=35216, per=99.93%, avg=34526.00, stdev=955.29, samples=4 00:31:10.214 iops : min= 8278, max= 8802, avg=8631.00, stdev=238.34, samples=4 00:31:10.214 write: IOPS=8633, BW=33.7MiB/s (35.4MB/s)(67.7MiB/2007msec); 0 zone resets 00:31:10.214 slat (nsec): min=1990, max=83613, avg=2593.97, stdev=1043.50 00:31:10.214 clat (usec): min=1038, max=13449, avg=6577.81, stdev=552.71 00:31:10.214 lat (usec): min=1044, max=13451, avg=6580.40, stdev=552.66 00:31:10.214 clat percentiles (usec): 00:31:10.214 | 1.00th=[ 5342], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6194], 00:31:10.214 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:31:10.214 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:31:10.214 | 99.00th=[ 7767], 99.50th=[ 7898], 99.90th=[11338], 99.95th=[12387], 00:31:10.214 | 99.99th=[13435] 00:31:10.214 bw ( KiB/s): min=34120, max=34832, per=100.00%, avg=34546.00, stdev=317.21, samples=4 00:31:10.214 iops : min= 8530, max= 8708, avg=8636.50, stdev=79.30, samples=4 00:31:10.214 lat (msec) : 2=0.03%, 4=0.12%, 10=99.67%, 20=0.19% 00:31:10.214 cpu : usr=61.47%, sys=34.85%, ctx=91, majf=0, minf=7 00:31:10.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:10.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:10.214 issued rwts: total=17335,17328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:10.214 00:31:10.214 Run status group 0 (all jobs): 00:31:10.214 READ: bw=33.7MiB/s (35.4MB/s), 33.7MiB/s-33.7MiB/s (35.4MB/s-35.4MB/s), io=67.7MiB (71.0MB), run=2007-2007msec 00:31:10.214 WRITE: bw=33.7MiB/s (35.4MB/s), 33.7MiB/s-33.7MiB/s (35.4MB/s-35.4MB/s), io=67.7MiB (71.0MB), run=2007-2007msec 00:31:10.214 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:10.214 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:10.214 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:10.214 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.214 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:10.214 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:10.214 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:10.214 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:10.214 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:10.215 05:07:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:10.215 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:10.215 fio-3.35 00:31:10.215 Starting 1 thread 00:31:12.115 00:31:12.115 test: (groupid=0, jobs=1): err= 0: pid=2431686: Mon Oct 28 05:07:02 2024 00:31:12.115 read: IOPS=8209, BW=128MiB/s (135MB/s)(257MiB/2007msec) 00:31:12.115 slat (nsec): min=2966, max=99662, avg=3773.77, stdev=1681.55 00:31:12.115 clat (usec): min=3018, max=17068, avg=8980.26, stdev=2155.12 00:31:12.115 lat (usec): min=3022, max=17074, avg=8984.04, stdev=2155.22 00:31:12.115 clat percentiles (usec): 00:31:12.115 | 1.00th=[ 4817], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7046], 00:31:12.115 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503], 00:31:12.115 | 70.00th=[10159], 80.00th=[10945], 90.00th=[11863], 95.00th=[12518], 00:31:12.115 | 99.00th=[14484], 99.50th=[15270], 99.90th=[16188], 99.95th=[16319], 00:31:12.115 | 99.99th=[16909] 00:31:12.115 bw ( KiB/s): min=59456, max=77760, per=52.56%, avg=69032.00, stdev=7525.53, samples=4 00:31:12.115 iops : min= 3716, max= 4860, avg=4314.50, stdev=470.35, samples=4 00:31:12.115 write: IOPS=4784, BW=74.8MiB/s (78.4MB/s)(140MiB/1876msec); 0 zone resets 00:31:12.115 slat (usec): min=30, max=212, avg=33.68, stdev= 5.78 00:31:12.115 clat (usec): min=5268, max=21222, avg=11391.50, 
stdev=2137.03 00:31:12.115 lat (usec): min=5300, max=21254, avg=11425.18, stdev=2137.51 00:31:12.115 clat percentiles (usec): 00:31:12.115 | 1.00th=[ 7242], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9634], 00:31:12.115 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11600], 00:31:12.115 | 70.00th=[12387], 80.00th=[13304], 90.00th=[14353], 95.00th=[15139], 00:31:12.115 | 99.00th=[16581], 99.50th=[17171], 99.90th=[19792], 99.95th=[20841], 00:31:12.115 | 99.99th=[21103] 00:31:12.115 bw ( KiB/s): min=63296, max=79008, per=93.27%, avg=71400.00, stdev=6456.62, samples=4 00:31:12.115 iops : min= 3956, max= 4938, avg=4462.50, stdev=403.54, samples=4 00:31:12.115 lat (msec) : 4=0.05%, 10=53.41%, 20=46.51%, 50=0.03% 00:31:12.115 cpu : usr=74.78%, sys=22.68%, ctx=40, majf=0, minf=3 00:31:12.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:12.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:12.115 issued rwts: total=16476,8976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:12.115 00:31:12.115 Run status group 0 (all jobs): 00:31:12.115 READ: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=257MiB (270MB), run=2007-2007msec 00:31:12.115 WRITE: bw=74.8MiB/s (78.4MB/s), 74.8MiB/s-74.8MiB/s (78.4MB/s-78.4MB/s), io=140MiB (147MB), run=1876-1876msec 00:31:12.373 05:07:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:12.636 05:07:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:12.636 05:07:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:12.636 05:07:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:12.636 05:07:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1494 -- # bdfs=() 00:31:12.636 05:07:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1494 -- # local bdfs 00:31:12.636 05:07:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:12.636 05:07:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1495 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:12.636 05:07:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:31:12.636 05:07:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # (( 1 == 0 )) 00:31:12.636 05:07:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:88:00.0 00:31:12.636 05:07:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:15.917 Nvme0n1 00:31:15.917 05:07:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=2e28972a-5160-4c2f-a74d-18950ef2f88e 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 
2e28972a-5160-4c2f-a74d-18950ef2f88e 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=2e28972a-5160-4c2f-a74d-18950ef2f88e 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:19.200 { 00:31:19.200 "uuid": "2e28972a-5160-4c2f-a74d-18950ef2f88e", 00:31:19.200 "name": "lvs_0", 00:31:19.200 "base_bdev": "Nvme0n1", 00:31:19.200 "total_data_clusters": 930, 00:31:19.200 "free_clusters": 930, 00:31:19.200 "block_size": 512, 00:31:19.200 "cluster_size": 1073741824 00:31:19.200 } 00:31:19.200 ]' 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="2e28972a-5160-4c2f-a74d-18950ef2f88e") .free_clusters' 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="2e28972a-5160-4c2f-a74d-18950ef2f88e") .cluster_size' 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:19.200 952320 00:31:19.200 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:19.462 e0bb43db-8dc0-41ea-8e81-4e085b23c3f9 00:31:19.462 05:07:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:19.719 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:19.976 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:20.234 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:20.234 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:20.234 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:20.234 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:20.234 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:20.234 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:20.234 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:20.234 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:20.234 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.234 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:20.234 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:20.234 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:20.493 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:20.493 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:20.493 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.493 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:20.493 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:20.493 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:20.493 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:20.493 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:20.493 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:20.493 05:07:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:20.493 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:20.493 fio-3.35 00:31:20.493 Starting 1 thread 00:31:23.024 00:31:23.024 test: (groupid=0, jobs=1): err= 0: pid=2432949: Mon Oct 28 05:07:13 2024 00:31:23.024 read: IOPS=5888, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2008msec) 00:31:23.024 slat (usec): min=2, max=155, avg= 2.68, stdev= 2.08 00:31:23.024 clat (usec): min=1334, max=171256, avg=11975.20, stdev=11724.66 00:31:23.024 lat (usec): min=1337, max=171296, avg=11977.88, stdev=11724.99 00:31:23.024 clat percentiles (msec): 00:31:23.024 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:31:23.024 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:31:23.024 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:31:23.024 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:23.024 | 99.99th=[ 171] 00:31:23.024 bw ( KiB/s): min=16728, max=25872, per=99.81%, avg=23510.00, stdev=4522.96, samples=4 00:31:23.024 iops : min= 4182, max= 
6468, avg=5877.50, stdev=1130.74, samples=4 00:31:23.024 write: IOPS=5879, BW=23.0MiB/s (24.1MB/s)(46.1MiB/2008msec); 0 zone resets 00:31:23.024 slat (usec): min=2, max=146, avg= 2.77, stdev= 1.91 00:31:23.024 clat (usec): min=348, max=169464, avg=9663.46, stdev=11015.72 00:31:23.024 lat (usec): min=352, max=169469, avg=9666.23, stdev=11016.05 00:31:23.024 clat percentiles (msec): 00:31:23.024 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:31:23.024 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:31:23.024 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:31:23.024 | 99.00th=[ 11], 99.50th=[ 18], 99.90th=[ 169], 99.95th=[ 169], 00:31:23.024 | 99.99th=[ 169] 00:31:23.024 bw ( KiB/s): min=17704, max=25536, per=99.91%, avg=23498.00, stdev=3864.20, samples=4 00:31:23.024 iops : min= 4426, max= 6384, avg=5874.50, stdev=966.05, samples=4 00:31:23.024 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:23.024 lat (msec) : 2=0.03%, 4=0.13%, 10=51.55%, 20=47.73%, 250=0.54% 00:31:23.024 cpu : usr=58.74%, sys=38.22%, ctx=88, majf=0, minf=35 00:31:23.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:23.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:23.024 issued rwts: total=11824,11807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.024 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:23.024 00:31:23.024 Run status group 0 (all jobs): 00:31:23.024 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.4MB), run=2008-2008msec 00:31:23.024 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.1MiB (48.4MB), run=2008-2008msec 00:31:23.024 05:07:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:23.282 05:07:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:24.657 05:07:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=52ae116f-3247-4ee4-bb57-c2dc7e37d2d7 00:31:24.657 05:07:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 52ae116f-3247-4ee4-bb57-c2dc7e37d2d7 00:31:24.657 05:07:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=52ae116f-3247-4ee4-bb57-c2dc7e37d2d7 00:31:24.657 05:07:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:24.657 05:07:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:24.657 05:07:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:24.657 05:07:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:24.657 05:07:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:24.657 { 00:31:24.657 "uuid": "2e28972a-5160-4c2f-a74d-18950ef2f88e", 00:31:24.657 "name": "lvs_0", 00:31:24.657 "base_bdev": "Nvme0n1", 00:31:24.657 "total_data_clusters": 930, 00:31:24.657 "free_clusters": 0, 00:31:24.657 "block_size": 512, 00:31:24.657 "cluster_size": 1073741824 00:31:24.657 }, 00:31:24.657 { 00:31:24.657 "uuid": 
"52ae116f-3247-4ee4-bb57-c2dc7e37d2d7", 00:31:24.657 "name": "lvs_n_0", 00:31:24.657 "base_bdev": "e0bb43db-8dc0-41ea-8e81-4e085b23c3f9", 00:31:24.657 "total_data_clusters": 237847, 00:31:24.657 "free_clusters": 237847, 00:31:24.657 "block_size": 512, 00:31:24.657 "cluster_size": 4194304 00:31:24.657 } 00:31:24.657 ]' 00:31:24.657 05:07:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="52ae116f-3247-4ee4-bb57-c2dc7e37d2d7") .free_clusters' 00:31:24.657 05:07:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:24.657 05:07:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="52ae116f-3247-4ee4-bb57-c2dc7e37d2d7") .cluster_size' 00:31:24.914 05:07:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:24.914 05:07:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:24.914 05:07:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:24.914 951388 00:31:24.914 05:07:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:25.480 065ed5fd-6a0f-435c-a125-943568378a84 00:31:25.480 05:07:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:25.738 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:25.997 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.291 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:26.292 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:26.292 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:26.292 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:26.292 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:26.292 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:26.292 05:07:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:26.575 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:26.575 fio-3.35 00:31:26.575 Starting 1 thread 00:31:29.104 00:31:29.104 test: (groupid=0, jobs=1): err= 0: pid=2433789: Mon Oct 28 05:07:19 2024 00:31:29.104 read: IOPS=5690, BW=22.2MiB/s (23.3MB/s)(44.7MiB/2009msec) 00:31:29.104 slat (nsec): min=1955, max=393061, avg=2628.47, stdev=4209.98 00:31:29.104 clat (usec): min=4407, max=21228, avg=12352.38, stdev=1090.87 00:31:29.104 lat (usec): min=4418, max=21230, avg=12355.01, stdev=1090.70 00:31:29.104 clat percentiles (usec): 00:31:29.104 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:31:29.104 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:31:29.104 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14091], 00:31:29.104 | 99.00th=[14615], 99.50th=[14877], 99.90th=[17957], 99.95th=[19792], 00:31:29.104 | 99.99th=[21103] 00:31:29.104 bw ( KiB/s): min=21504, max=23320, per=99.97%, avg=22754.00, stdev=843.65, samples=4 00:31:29.104 iops : min= 5376, max= 5830, avg=5688.50, stdev=210.91, samples=4 00:31:29.104 write: IOPS=5670, BW=22.2MiB/s (23.2MB/s)(44.5MiB/2009msec); 0 zone resets 00:31:29.104 slat (usec): min=2, max=136, avg= 2.63, stdev= 1.68 00:31:29.104 clat (usec): min=2199, max=19613, avg=9989.07, stdev=944.12 00:31:29.104 lat (usec): min=2209, max=19616, avg=9991.69, stdev=944.04 00:31:29.104 clat percentiles (usec): 00:31:29.104 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:31:29.104 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:31:29.104 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:31:29.104 | 99.00th=[12125], 99.50th=[12387], 99.90th=[16581], 99.95th=[18220], 00:31:29.104 | 
99.99th=[19530] 00:31:29.104 bw ( KiB/s): min=22528, max=22912, per=99.82%, avg=22642.00, stdev=182.24, samples=4 00:31:29.104 iops : min= 5632, max= 5728, avg=5660.50, stdev=45.56, samples=4 00:31:29.104 lat (msec) : 4=0.04%, 10=25.99%, 20=73.95%, 50=0.02% 00:31:29.104 cpu : usr=59.56%, sys=37.30%, ctx=105, majf=0, minf=35 00:31:29.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:29.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:29.104 issued rwts: total=11432,11393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.104 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:29.104 00:31:29.104 Run status group 0 (all jobs): 00:31:29.105 READ: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.7MiB (46.8MB), run=2009-2009msec 00:31:29.105 WRITE: bw=22.2MiB/s (23.2MB/s), 22.2MiB/s-22.2MiB/s (23.2MB/s-23.2MB/s), io=44.5MiB (46.7MB), run=2009-2009msec 00:31:29.105 05:07:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:29.362 05:07:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:29.362 05:07:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:33.542 05:07:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:33.542 05:07:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:36.822 05:07:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:36.822 05:07:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:39.349 rmmod nvme_tcp 00:31:39.349 rmmod nvme_fabrics 00:31:39.349 rmmod nvme_keyring 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # 
'[' -n 2430765 ']' 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 2430765 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2430765 ']' 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2430765 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2430765 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2430765' 00:31:39.349 killing process with pid 2430765 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2430765 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2430765 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.349 05:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.251 05:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:41.251 00:31:41.251 real 0m39.509s 00:31:41.251 user 2m32.474s 00:31:41.251 sys 0m7.257s 00:31:41.251 05:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:41.251 05:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.251 ************************************ 00:31:41.251 END TEST nvmf_fio_host 00:31:41.251 ************************************ 00:31:41.251 05:07:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:41.251 05:07:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:41.251 05:07:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:41.251 05:07:31 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:41.251 ************************************ 00:31:41.251 START TEST nvmf_failover 00:31:41.251 ************************************ 00:31:41.251 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:41.511 * Looking for test storage... 00:31:41.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1689 -- # lcov --version 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:31:41.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.511 --rc genhtml_branch_coverage=1 00:31:41.511 --rc genhtml_function_coverage=1 00:31:41.511 --rc genhtml_legend=1 00:31:41.511 --rc geninfo_all_blocks=1 00:31:41.511 --rc geninfo_unexecuted_blocks=1 00:31:41.511 00:31:41.511 ' 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:31:41.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.511 --rc genhtml_branch_coverage=1 00:31:41.511 --rc genhtml_function_coverage=1 00:31:41.511 --rc genhtml_legend=1 00:31:41.511 --rc geninfo_all_blocks=1 00:31:41.511 --rc geninfo_unexecuted_blocks=1 00:31:41.511 00:31:41.511 ' 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:31:41.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.511 --rc genhtml_branch_coverage=1 00:31:41.511 --rc genhtml_function_coverage=1 00:31:41.511 --rc genhtml_legend=1 00:31:41.511 --rc geninfo_all_blocks=1 00:31:41.511 --rc geninfo_unexecuted_blocks=1 00:31:41.511 00:31:41.511 ' 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:31:41.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.511 --rc genhtml_branch_coverage=1 00:31:41.511 --rc genhtml_function_coverage=1 00:31:41.511 --rc genhtml_legend=1 00:31:41.511 --rc geninfo_all_blocks=1 00:31:41.511 --rc geninfo_unexecuted_blocks=1 00:31:41.511 00:31:41.511 ' 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:41.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:41.511 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
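For orientation, the nvmf_fio_host run traced above reduces to the following minimal sketch of RPC calls plus the fio invocation. This is a condensed reconstruction from the xtrace, not the verbatim host/fio.sh; $rootdir stands for the SPDK checkout (as in the trace), and in this run the target is launched inside the cvl_0_0_ns_spdk network namespace:

  # start the NVMe-oF target (backgrounded; the script then waits for /var/tmp/spdk.sock)
  $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # create the TCP transport and export a 64 MB malloc bdev as a namespace of cnode1
  $rootdir/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $rootdir/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  $rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # drive I/O over NVMe/TCP through the SPDK fio plugin
  LD_PRELOAD=$rootdir/build/fio/spdk_nvme /usr/src/fio/fio \
    $rootdir/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The same pattern repeats above for cnode2 (backed by an lvol carved out of the attached NVMe drive at 0000:88:00.0) and cnode3 (backed by a nested lvol store created on top of that lvol), followed by teardown in reverse order.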
00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:41.512 05:07:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:43.416 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.416 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:43.416 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:43.417 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:43.417 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.417 05:07:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.676 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.676 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.676 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:43.676 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.676 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.676 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.676 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:43.676 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:43.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:31:43.676 00:31:43.676 --- 10.0.0.2 ping statistics --- 00:31:43.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.676 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:31:43.676 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:43.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:31:43.676 00:31:43.676 --- 10.0.0.1 ping statistics --- 00:31:43.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.676 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:31:43.676 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=2437061 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 2437061 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2437061 ']' 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:43.677 05:07:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:43.677 [2024-10-28 05:07:34.179068] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:31:43.677 [2024-10-28 05:07:34.179164] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:43.935 [2024-10-28 05:07:34.320534] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
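The target/initiator split set up just above is plain iproute2 plumbing: the first port (cvl_0_0) is moved into a private namespace and becomes the NVMe/TCP target side, while the second port (cvl_0_1) stays in the root namespace as the initiator side. A stripped-down sketch of that setup, using the same names, addresses and commands the log shows (run as root; the cvl_0_x interface names are specific to this machine):

    #!/usr/bin/env bash
    # Sketch of the namespace split above: cvl_0_0 -> target netns, cvl_0_1 -> initiator.
    set -e
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target-side port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port towards the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions, as the log does
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

With this in place the target app is started inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is exactly what the NVMF_TARGET_NS_CMD wrapper above does.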
00:31:43.935 [2024-10-28 05:07:34.356867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:43.935 [2024-10-28 05:07:34.406366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:43.935 [2024-10-28 05:07:34.406420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:43.935 [2024-10-28 05:07:34.406448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:43.935 [2024-10-28 05:07:34.406464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:43.935 [2024-10-28 05:07:34.406475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:43.935 [2024-10-28 05:07:34.408096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:43.935 [2024-10-28 05:07:34.408151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:43.935 [2024-10-28 05:07:34.408153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.870 05:07:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:44.870 05:07:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:44.870 05:07:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:44.870 05:07:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.870 05:07:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:44.870 05:07:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:44.870 05:07:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:45.128 [2024-10-28 05:07:35.517629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.128 05:07:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:45.387 Malloc0 00:31:45.387 05:07:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:45.644 05:07:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:45.908 05:07:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.168 [2024-10-28 05:07:36.630850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.168 05:07:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:46.425 [2024-10-28 05:07:36.903021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:46.425 05:07:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:46.683 [2024-10-28 05:07:37.167171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:46.683 05:07:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2437411 00:31:46.683 05:07:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:46.683 05:07:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:46.683 05:07:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2437411 /var/tmp/bdevperf.sock 00:31:46.683 05:07:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2437411 ']' 00:31:46.683 05:07:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:46.683 05:07:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:46.683 05:07:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:46.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:46.683 05:07:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:46.683 05:07:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:48.058 05:07:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:48.058 05:07:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:48.058 05:07:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:48.058 NVMe0n1 00:31:48.058 05:07:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:48.622 00:31:48.622 05:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2437669 00:31:48.622 05:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:48.622 05:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:49.558 05:07:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.817 [2024-10-28 05:07:40.349257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:31:49.817 [2024-10-28 05:07:40.349325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xd3b210 is same with the state(6) to be set [the same tcp.c:1773 message repeats, with successive timestamps, several dozen more times for tqpair=0xd3b210 while the 4420 listener is torn down] 00:31:49.818 05:07:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:53.100 05:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:53.357 00:31:53.357 05:07:43 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:53.615 [2024-10-28 05:07:43.982511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3be70 is same with the state(6) to be set 00:31:53.615 [2024-10-28 05:07:43.982568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3be70 is same with the state(6) to be set 00:31:53.615 [2024-10-28 05:07:43.982583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3be70 is same with the state(6) to be set 00:31:53.615 [2024-10-28 05:07:43.982612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3be70 is same with the state(6) to be set 00:31:53.615 [2024-10-28 05:07:43.982625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3be70 is same with the state(6) to be set 00:31:53.615 [2024-10-28 05:07:43.982648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3be70 is same with the state(6) to be set 00:31:53.615 [2024-10-28 05:07:43.982662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3be70 is same with the state(6) to be set 00:31:53.615 [2024-10-28 05:07:43.982674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3be70 is same with the state(6) to be set 00:31:53.615 [2024-10-28 05:07:43.982685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3be70 is same with the state(6) to be set 00:31:53.615 05:07:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:56.891 05:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.891 [2024-10-28 05:07:47.261248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.891 05:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:57.824 05:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:58.082 05:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2437669 00:32:04.711 { 00:32:04.711 "results": [ 00:32:04.711 { 00:32:04.711 "job": "NVMe0n1", 00:32:04.711 "core_mask": "0x1", 00:32:04.711 "workload": "verify", 00:32:04.711 "status": "finished", 00:32:04.711 "verify_range": { 00:32:04.711 "start": 0, 00:32:04.711 "length": 16384 00:32:04.711 }, 00:32:04.711 "queue_depth": 128, 00:32:04.711 "io_size": 4096, 00:32:04.711 "runtime": 15.007461, 00:32:04.711 "iops": 8257.226189026911, 00:32:04.711 "mibps": 32.25478980088637, 00:32:04.711 "io_failed": 12285, 00:32:04.711 "io_timeout": 0, 00:32:04.711 "avg_latency_us": 14076.428346735753, 00:32:04.711 "min_latency_us": 550.4926675329497, 00:32:04.711 "max_latency_us": 27056.258399851493 00:32:04.711 } 00:32:04.711 ], 00:32:04.711 "core_count": 1 00:32:04.711 } 00:32:04.711 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2437411 00:32:04.711 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2437411 ']' 00:32:04.711 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@954 -- # kill -0 2437411 00:32:04.711 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:04.711 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:04.711 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2437411 00:32:04.711 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:04.711 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:04.711 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2437411' 00:32:04.711 killing process with pid 2437411 00:32:04.711 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2437411 00:32:04.711 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2437411 00:32:04.711 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:04.711 [2024-10-28 05:07:37.229041] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:32:04.711 [2024-10-28 05:07:37.229131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2437411 ] 00:32:04.711 [2024-10-28 05:07:37.361843] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:04.711 [2024-10-28 05:07:37.399138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.711 [2024-10-28 05:07:37.445259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.711 Running I/O for 15 seconds... 
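Taken together, the failover exercise that produced the abort storm below is just listener flipping under a live bdevperf run. A condensed, approximate sketch of the sequence the log shows (paths, ports and sleep intervals are taken from this run, not from the authoritative host/failover.sh):

    #!/usr/bin/env bash
    # Condensed sketch of the listener flipping that drives the failover test above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path used in this run
    RPC="$SPDK_DIR/scripts/rpc.py"
    BPERF_SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # bdevperf is already running with -z -r $BPERF_SOCK -q 128 -o 4096 -w verify -t 15 -f
    # and has NVMe0 attached through ports 4420 and 4421 with -x failover.

    # kick off the timed I/O run in the background
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests &
    test_pid=$!
    sleep 1

    # drop the active path (4420); I/O should fail over to 4421
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 3

    # add a third path (4422) to bdevperf, then drop 4421 as well
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s 4422 -f ipv4 -n "$NQN" -x failover
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
    sleep 3

    # bring 4420 back, give the initiator a moment, then drop 4422
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

    wait "$test_pid"   # bdevperf prints the JSON summary shown above once the 15 s run ends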
00:32:04.711 8354.00 IOPS, 32.63 MiB/s [2024-10-28T04:07:55.307Z] [2024-10-28 05:07:40.352151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.711 [2024-10-28 05:07:40.352641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.711 [2024-10-28 05:07:40.352675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.711 [2024-10-28 05:07:40.352703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.711 [2024-10-28 05:07:40.352732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.711 [2024-10-28 05:07:40.352760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.711 [2024-10-28 05:07:40.352788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.711 [2024-10-28 05:07:40.352816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352831] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.711 [2024-10-28 05:07:40.352845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.711 [2024-10-28 05:07:40.352873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.711 [2024-10-28 05:07:40.352888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.352901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.352920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.352950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.352966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.352980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.352995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353133] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77336 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 
[2024-10-28 05:07:40.353753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.353978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.353993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.354006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.354021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.354034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.354053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.354067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.354082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.354096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.712 [2024-10-28 05:07:40.354111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.712 [2024-10-28 05:07:40.354124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.713 [2024-10-28 05:07:40.354440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.713 [2024-10-28 05:07:40.354468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 
05:07:40.354923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.354972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.354987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.355001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.355015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.713 [2024-10-28 05:07:40.355029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.355059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.713 [2024-10-28 05:07:40.355075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77760 len:8 PRP1 0x0 PRP2 0x0 00:32:04.713 [2024-10-28 05:07:40.355089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.355147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.713 [2024-10-28 05:07:40.355169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.355183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.713 [2024-10-28 05:07:40.355196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.355210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.713 [2024-10-28 05:07:40.355228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.355242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.713 [2024-10-28 05:07:40.355255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.355267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5a4d0 is same with the state(6) to be set 00:32:04.713 [2024-10-28 05:07:40.355476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.713 [2024-10-28 
05:07:40.355496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.713 [2024-10-28 05:07:40.355509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77768 len:8 PRP1 0x0 PRP2 0x0 00:32:04.713 [2024-10-28 05:07:40.355522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.355538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.713 [2024-10-28 05:07:40.355549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.713 [2024-10-28 05:07:40.355561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77776 len:8 PRP1 0x0 PRP2 0x0 00:32:04.713 [2024-10-28 05:07:40.355574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.713 [2024-10-28 05:07:40.355587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.713 [2024-10-28 05:07:40.355598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.713 [2024-10-28 05:07:40.355608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77784 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.355621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.355643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.355657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.355668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77792 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.355681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.355694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.355705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.355716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77800 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.355728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.355741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.355752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.355763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77808 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.355775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.355788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.355798] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.355813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77816 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.355827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.355840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.355850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.355861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77824 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.355873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.355886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.355897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.355908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77832 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.355921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.355934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.355944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.355955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77840 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.355968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.355981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.355991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77848 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77856 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77864 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77872 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77880 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77888 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77896 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77904 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 
05:07:40.356381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77912 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77920 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77928 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77936 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77944 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77952 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77960 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77968 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.714 [2024-10-28 05:07:40.356781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.714 [2024-10-28 05:07:40.356793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77976 len:8 PRP1 0x0 PRP2 0x0 00:32:04.714 [2024-10-28 05:07:40.356806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.714 [2024-10-28 05:07:40.356819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.356830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.356841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77984 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.356855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.356868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.356879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.356890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77992 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.356902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.356916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.356927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.356938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78000 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.356955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.356969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.356980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.356992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:78008 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.357004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78016 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.357059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77000 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.357108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77008 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.357162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77016 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.357212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77024 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.357261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77032 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 
[2024-10-28 05:07:40.357309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77040 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.357362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77048 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.357411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77056 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.357465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77064 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.357515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77072 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.357564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77080 len:8 PRP1 0x0 PRP2 0x0 00:32:04.715 [2024-10-28 05:07:40.357612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.715 [2024-10-28 05:07:40.357625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.715 [2024-10-28 05:07:40.357644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.715 [2024-10-28 05:07:40.357656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77088 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.357670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.357683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.357695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.357706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77096 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.357719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.357737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.357748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.357760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77104 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.357772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.357786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.357797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.357808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77128 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.357821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.357834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.357845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.357856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77136 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.357869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.357881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.357892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.357903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77144 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.357916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.357929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.357940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.357951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77152 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.357964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.357977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.357987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.357998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77160 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77168 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77176 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77184 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77192 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:04.716 [2024-10-28 05:07:40.358219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77200 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77208 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77216 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77224 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77232 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77240 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358510] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77248 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77256 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77264 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77272 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77280 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77288 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.716 [2024-10-28 05:07:40.358829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77296 len:8 PRP1 0x0 PRP2 0x0 00:32:04.716 [2024-10-28 05:07:40.358841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.716 [2024-10-28 05:07:40.358860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.716 [2024-10-28 05:07:40.358875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.358887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77304 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.358899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.358912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.358923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.358935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77312 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.358948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.358961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.358972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.358983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77320 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.358995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.359008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.359019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.359030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77328 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.359042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.359056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.359067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.359077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77336 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.359090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.359102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 
05:07:40.359113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.359124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77344 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.359144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.359157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.359168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.359179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77352 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.359192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.359205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.359216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.359227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77360 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.359240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.359264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.359277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.359288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77368 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.359301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.359314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.359325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.359336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77376 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.359348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.359361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.359372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.359383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77384 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.359397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.359410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.359420] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.359432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77392 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.359445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.359458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.359469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.359480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77400 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.359493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.359506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.359517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.359529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77408 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.365412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.365446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.365459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.365471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77416 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.365485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.365498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.365509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.365520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77424 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.365539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.365554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.365565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.365575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77432 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.365588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.365600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.365610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.365621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77440 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.365649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.365666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.365677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.365688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77448 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.365701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.365714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.365724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.365735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77456 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.365747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.365759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.365770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.365781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77464 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.365793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.365805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.365816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.365827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77472 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.365839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.365851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.365862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 05:07:40.365873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77480 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.365885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.365897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.365908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.717 [2024-10-28 
05:07:40.365922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77488 len:8 PRP1 0x0 PRP2 0x0 00:32:04.717 [2024-10-28 05:07:40.365935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.717 [2024-10-28 05:07:40.365948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.717 [2024-10-28 05:07:40.365959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.365969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77496 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.365982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.365994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77504 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77512 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77520 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77528 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77536 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77544 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77552 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77568 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77576 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:77584 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77592 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77600 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77112 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77120 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77608 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77616 len:8 PRP1 0x0 PRP2 0x0 
00:32:04.718 [2024-10-28 05:07:40.366778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77624 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77632 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77640 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77648 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.366961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.366974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.366984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.366998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77656 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.367011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.367024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.367034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.367045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77664 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.367057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.718 [2024-10-28 05:07:40.367070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.718 [2024-10-28 05:07:40.367080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.718 [2024-10-28 05:07:40.367091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77672 len:8 PRP1 0x0 PRP2 0x0 00:32:04.718 [2024-10-28 05:07:40.367103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:40.367116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.719 [2024-10-28 05:07:40.367126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.719 [2024-10-28 05:07:40.367137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77680 len:8 PRP1 0x0 PRP2 0x0 00:32:04.719 [2024-10-28 05:07:40.367149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:40.367161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.719 [2024-10-28 05:07:40.367172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.719 [2024-10-28 05:07:40.367183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77688 len:8 PRP1 0x0 PRP2 0x0 00:32:04.719 [2024-10-28 05:07:40.367195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:40.367207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.719 [2024-10-28 05:07:40.367218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.719 [2024-10-28 05:07:40.367229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77696 len:8 PRP1 0x0 PRP2 0x0 00:32:04.719 [2024-10-28 05:07:40.367241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:40.367253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.719 [2024-10-28 05:07:40.367263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.719 [2024-10-28 05:07:40.367274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77704 len:8 PRP1 0x0 PRP2 0x0 00:32:04.719 [2024-10-28 05:07:40.367286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:40.367299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.719 [2024-10-28 05:07:40.367309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.719 [2024-10-28 05:07:40.367320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77712 len:8 PRP1 0x0 PRP2 0x0 00:32:04.719 [2024-10-28 05:07:40.367338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:40.367353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.719 [2024-10-28 05:07:40.367367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.719 [2024-10-28 05:07:40.367378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77720 len:8 PRP1 0x0 PRP2 0x0 00:32:04.719 [2024-10-28 05:07:40.367391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:40.367404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.719 [2024-10-28 05:07:40.367415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.719 [2024-10-28 05:07:40.367426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77728 len:8 PRP1 0x0 PRP2 0x0 00:32:04.719 [2024-10-28 05:07:40.367438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:40.367451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.719 [2024-10-28 05:07:40.367461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.719 [2024-10-28 05:07:40.367472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77736 len:8 PRP1 0x0 PRP2 0x0 00:32:04.719 [2024-10-28 05:07:40.367485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:40.367497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.719 [2024-10-28 05:07:40.367508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.719 [2024-10-28 05:07:40.367519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77744 len:8 PRP1 0x0 PRP2 0x0 00:32:04.719 [2024-10-28 05:07:40.367531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:40.367544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.719 [2024-10-28 05:07:40.367555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.719 [2024-10-28 05:07:40.367565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77752 len:8 PRP1 0x0 PRP2 0x0 00:32:04.719 [2024-10-28 05:07:40.367578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:40.367590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.719 [2024-10-28 05:07:40.367602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.719 [2024-10-28 05:07:40.367612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77760 len:8 PRP1 0x0 PRP2 0x0 00:32:04.719 [2024-10-28 05:07:40.367625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:04.719 [2024-10-28 05:07:40.367699] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:04.719 [2024-10-28 05:07:40.367720] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:04.719 [2024-10-28 05:07:40.367789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5a4d0 (9): Bad file descriptor 00:32:04.719 [2024-10-28 05:07:40.371010] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:04.719 [2024-10-28 05:07:40.493344] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:32:04.719 7793.00 IOPS, 30.44 MiB/s [2024-10-28T04:07:55.315Z] 8090.00 IOPS, 31.60 MiB/s [2024-10-28T04:07:55.315Z] 8058.75 IOPS, 31.48 MiB/s [2024-10-28T04:07:55.315Z] [2024-10-28 05:07:43.983224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86008 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.719 [2024-10-28 05:07:43.983742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.719 [2024-10-28 05:07:43.983757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.983770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.983785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:04.720 [2024-10-28 05:07:43.983798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.983813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.983827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.983842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.983855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.983870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.983884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.983898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.983912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.983926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.983940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.983956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.983985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984103] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.720 [2024-10-28 05:07:43.984494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.720 [2024-10-28 05:07:43.984919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.720 [2024-10-28 05:07:43.984933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.984961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.984976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.984990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 
05:07:43.985302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:39 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.985977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.985991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.986005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.986020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.986033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.986047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.986061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.986076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.986090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.986104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.986123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.986139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.986152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.986167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.721 [2024-10-28 05:07:43.986181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.721 [2024-10-28 05:07:43.986196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86792 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 
05:07:43.986504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.722 [2024-10-28 05:07:43.986864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.722 [2024-10-28 05:07:43.986891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.722 [2024-10-28 05:07:43.986919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.986950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.722 [2024-10-28 05:07:43.986967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86304 len:8 PRP1 0x0 PRP2 0x0 00:32:04.722 [2024-10-28 05:07:43.986981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.987242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.722 [2024-10-28 05:07:43.987262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.722 [2024-10-28 05:07:43.987274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86312 len:8 PRP1 0x0 PRP2 0x0 00:32:04.722 [2024-10-28 05:07:43.987287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.987303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.722 [2024-10-28 05:07:43.987315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.722 [2024-10-28 05:07:43.987331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86320 len:8 PRP1 0x0 PRP2 0x0 00:32:04.722 [2024-10-28 05:07:43.987344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.987358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.722 [2024-10-28 05:07:43.987368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.722 [2024-10-28 05:07:43.987379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86328 len:8 PRP1 0x0 PRP2 
0x0 00:32:04.722 [2024-10-28 05:07:43.987391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.987404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.722 [2024-10-28 05:07:43.987414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.722 [2024-10-28 05:07:43.987425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86336 len:8 PRP1 0x0 PRP2 0x0 00:32:04.722 [2024-10-28 05:07:43.987438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.987450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.722 [2024-10-28 05:07:43.987461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.722 [2024-10-28 05:07:43.987472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85952 len:8 PRP1 0x0 PRP2 0x0 00:32:04.722 [2024-10-28 05:07:43.987484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.987496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.722 [2024-10-28 05:07:43.987508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.722 [2024-10-28 05:07:43.987518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85960 len:8 PRP1 0x0 PRP2 0x0 00:32:04.722 [2024-10-28 05:07:43.987531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.987544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.722 [2024-10-28 05:07:43.987555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.722 [2024-10-28 05:07:43.987567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85968 len:8 PRP1 0x0 PRP2 0x0 00:32:04.722 [2024-10-28 05:07:43.987579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.987592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.722 [2024-10-28 05:07:43.987603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.722 [2024-10-28 05:07:43.987614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85976 len:8 PRP1 0x0 PRP2 0x0 00:32:04.722 [2024-10-28 05:07:43.987626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.987648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.722 [2024-10-28 05:07:43.987659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.722 [2024-10-28 05:07:43.987671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85984 len:8 PRP1 0x0 PRP2 0x0 00:32:04.722 [2024-10-28 05:07:43.987683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.722 [2024-10-28 05:07:43.987697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.987711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.987723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85992 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.987735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.987748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.987759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.987770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86000 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.987783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.987795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.987806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.987817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86008 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.987830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.987843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.987853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.987864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86016 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.987876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.987889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.987900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.987911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86024 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.987923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.987935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.987946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.987957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86032 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.987969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.987982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.987993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86040 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86048 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86056 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86064 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86072 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86080 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:04.723 [2024-10-28 05:07:43.988263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86088 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86096 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86104 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86112 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86120 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86128 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988558] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86136 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86144 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86152 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86160 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86168 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.723 [2024-10-28 05:07:43.988825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86176 len:8 PRP1 0x0 PRP2 0x0 00:32:04.723 [2024-10-28 05:07:43.988854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.723 [2024-10-28 05:07:43.988867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:32:04.723 [2024-10-28 05:07:43.988877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.723 [2024-10-28 05:07:43.988889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86184 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.988901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.988914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.988925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.988936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86192 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.988948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.988961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.988972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.988983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86200 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.988995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86208 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86216 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86224 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989165] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86232 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86240 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86248 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86256 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86264 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86272 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:32:04.724 [2024-10-28 05:07:43.989475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86280 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86344 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86352 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86360 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86368 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86376 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 
05:07:43.989776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86384 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86392 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86400 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.724 [2024-10-28 05:07:43.989924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86408 len:8 PRP1 0x0 PRP2 0x0 00:32:04.724 [2024-10-28 05:07:43.989936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.724 [2024-10-28 05:07:43.989949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.724 [2024-10-28 05:07:43.989959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.989970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86416 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.989990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86424 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86432 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86440 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86448 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86456 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86464 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86472 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:86480 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86488 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86496 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86504 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86512 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86520 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.990598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86528 len:8 PRP1 0x0 PRP2 0x0 
00:32:04.725 [2024-10-28 05:07:43.990651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.990664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.990675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.990686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86536 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.996596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.996643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.996659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.996671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86544 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.996684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.996697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.996708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.996719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86552 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.996731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.996744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.996754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.996765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86560 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.996777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.996790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.996800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.996811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86568 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.996823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.996836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.996846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.996857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86576 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.996869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.996881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.996891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.996902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86584 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.996914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.996927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.996937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.996948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86592 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.996961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.996973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.996983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.996994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86600 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.997010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.997023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.725 [2024-10-28 05:07:43.997033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.725 [2024-10-28 05:07:43.997044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86608 len:8 PRP1 0x0 PRP2 0x0 00:32:04.725 [2024-10-28 05:07:43.997056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.725 [2024-10-28 05:07:43.997068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86616 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86624 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86632 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86640 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86648 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86656 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86664 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86672 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:04.726 [2024-10-28 05:07:43.997437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86680 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86688 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86696 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86704 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86712 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86720 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997723] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86728 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86736 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86744 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86752 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86760 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.997955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.997966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.997976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86768 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.997989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.998001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.998011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.998022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86776 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.998034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.998046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.998057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.998067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86784 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.998079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.998095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.998106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.998117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86792 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.998129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.998142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.998152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.998163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86800 len:8 PRP1 0x0 PRP2 0x0 00:32:04.726 [2024-10-28 05:07:43.998175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.726 [2024-10-28 05:07:43.998187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.726 [2024-10-28 05:07:43.998198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.726 [2024-10-28 05:07:43.998208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86808 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86816 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 
05:07:43.998289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86824 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86832 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86840 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86848 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86856 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86864 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998566] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86872 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86880 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86888 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86896 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86904 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86912 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86920 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86928 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.998956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86936 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.998968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.998980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.998991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.999002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86944 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.999013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.999026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.999037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.999047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86952 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.999066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.999080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.999090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.999101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86960 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.999113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.999126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.999137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 
05:07:43.999148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86968 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.999160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.999176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.999188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.999199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86288 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.999211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.999224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.999235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.999246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86296 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.999258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.999271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.727 [2024-10-28 05:07:43.999281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.727 [2024-10-28 05:07:43.999292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86304 len:8 PRP1 0x0 PRP2 0x0 00:32:04.727 [2024-10-28 05:07:43.999304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.999371] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:04.727 [2024-10-28 05:07:43.999414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.727 [2024-10-28 05:07:43.999433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.727 [2024-10-28 05:07:43.999448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.728 [2024-10-28 05:07:43.999461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:43.999475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.728 [2024-10-28 05:07:43.999487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:43.999500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.728 [2024-10-28 05:07:43.999513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:43.999526] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:32:04.728 [2024-10-28 05:07:43.999566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5a4d0 (9): Bad file descriptor 00:32:04.728 [2024-10-28 05:07:44.002839] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:04.728 [2024-10-28 05:07:44.120185] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:32:04.728 7892.40 IOPS, 30.83 MiB/s [2024-10-28T04:07:55.324Z] 8001.33 IOPS, 31.26 MiB/s [2024-10-28T04:07:55.324Z] 8091.00 IOPS, 31.61 MiB/s [2024-10-28T04:07:55.324Z] 8146.25 IOPS, 31.82 MiB/s [2024-10-28T04:07:55.324Z] 8185.89 IOPS, 31.98 MiB/s [2024-10-28T04:07:55.324Z] [2024-10-28 05:07:48.575485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.575980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.575994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576080] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576372] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.728 [2024-10-28 05:07:48.576444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.728 [2024-10-28 05:07:48.576458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.576881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.576913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.576942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.576957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.576971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 
[2024-10-28 05:07:48.576985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.576999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.577347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.729 [2024-10-28 05:07:48.577377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:116 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.729 [2024-10-28 05:07:48.577693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.729 [2024-10-28 05:07:48.577706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.577721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.577734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.577749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.577763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.577778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.577791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.577807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.577820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.577840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.577854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.577869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34568 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.577883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.577898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.577912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.577927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.577940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.577955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.577969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.577985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.577999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:04.730 [2024-10-28 05:07:48.578170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 
05:07:48.578463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:34760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.730 [2024-10-28 05:07:48.578906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.730 [2024-10-28 05:07:48.578921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.731 [2024-10-28 05:07:48.578938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.578954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.731 [2024-10-28 05:07:48.578968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.578983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.731 [2024-10-28 05:07:48.578997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.731 [2024-10-28 05:07:48.579025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.731 [2024-10-28 05:07:48.579054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.731 [2024-10-28 05:07:48.579083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.731 [2024-10-28 05:07:48.579112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.731 [2024-10-28 05:07:48.579140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.731 [2024-10-28 05:07:48.579169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.731 [2024-10-28 05:07:48.579197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.731 [2024-10-28 05:07:48.579226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.731 [2024-10-28 05:07:48.579255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.731 [2024-10-28 05:07:48.579284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7d3c0 is same with the state(6) to be set 00:32:04.731 [2024-10-28 05:07:48.579318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:04.731 [2024-10-28 05:07:48.579330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:04.731 [2024-10-28 05:07:48.579341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34944 len:8 PRP1 0x0 PRP2 0x0 00:32:04.731 [2024-10-28 05:07:48.579354] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579419] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:04.731 [2024-10-28 05:07:48.579456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.731 [2024-10-28 05:07:48.579475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.731 [2024-10-28 05:07:48.579504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.731 [2024-10-28 05:07:48.579549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.731 [2024-10-28 05:07:48.579591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.731 [2024-10-28 05:07:48.579605] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:04.731 [2024-10-28 05:07:48.582847] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:04.731 [2024-10-28 05:07:48.582886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5a4d0 (9): Bad file descriptor 00:32:04.731 [2024-10-28 05:07:48.659261] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
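[Editor's note] The long run of paired *NOTICE* records above is the SPDK NVMe host code printing every in-flight READ/WRITE that completed with ABORTED - SQ DELETION while the TCP qpair being failed away from was torn down; the final records show bdev_nvme starting a failover back to 10.0.0.2:4420 and the subsequent controller reset succeeding. For orientation, the sequence being exercised corresponds roughly to the sketch below, reconstructed from the rpc.py invocations visible elsewhere in this log; the $SPDK and $RPC shorthands and the exact ordering are assumptions, not the literal host/failover.sh code.

# Condensed sketch of the multipath failover flow (assumed, paths shortened).
SPDK=/path/to/spdk                                   # hypothetical shorthand for the long workspace path in this log
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# Target side: expose the subsystem on additional ports so the host has alternate paths.
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# Host side (bdevperf): attach the same controller over each path with -x failover.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

# While verify I/O runs, detach the currently active path; queued commands complete with
# ABORTED - SQ DELETION (as logged above) and bdev_nvme fails over to the next path.
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1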
00:32:04.731 8128.60 IOPS, 31.75 MiB/s [2024-10-28T04:07:55.327Z] 8170.82 IOPS, 31.92 MiB/s [2024-10-28T04:07:55.327Z] 8195.08 IOPS, 32.01 MiB/s [2024-10-28T04:07:55.327Z] 8218.46 IOPS, 32.10 MiB/s [2024-10-28T04:07:55.327Z] 8241.50 IOPS, 32.19 MiB/s 00:32:04.731 Latency(us) 00:32:04.731 [2024-10-28T04:07:55.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.731 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:04.731 Verification LBA range: start 0x0 length 0x4000 00:32:04.731 NVMe0n1 : 15.01 8257.23 32.25 818.59 0.00 14076.43 550.49 27056.26 00:32:04.731 [2024-10-28T04:07:55.327Z] =================================================================================================================== 00:32:04.731 [2024-10-28T04:07:55.327Z] Total : 8257.23 32.25 818.59 0.00 14076.43 550.49 27056.26 00:32:04.731 Received shutdown signal, test time was about 15.000000 seconds 00:32:04.731 00:32:04.731 Latency(us) 00:32:04.731 [2024-10-28T04:07:55.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.731 [2024-10-28T04:07:55.327Z] =================================================================================================================== 00:32:04.731 [2024-10-28T04:07:55.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2439344 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2439344 /var/tmp/bdevperf.sock 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2439344 ']' 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:04.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
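[Editor's note] In the 15-second summary above, the MiB/s column follows directly from IOPS at the 4096-byte I/O size (8257.23 x 4096 / 2^20 ≈ 32.25 MiB/s), and the nonzero Fail/s column appears to account for I/Os that errored out while paths were being torn down; the all-zero table printed after the shutdown signal presumably reflects that no further I/O was outstanding at that point. The script then greps its own captured output for 'Resetting controller successful' and requires exactly 3 hits, one per forced failover, before launching the second, short bdevperf run traced below. A small illustrative re-check, not part of the suite (try.txt is the capture file named later in this log):

# Recompute throughput from the reported IOPS and I/O size, then repeat the failover-count check.
iops=8257.23
io_size=4096
awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'   # prints 32.25 MiB/s

count=$(grep -c 'Resetting controller successful' try.txt)
(( count == 3 )) || echo "expected 3 successful controller resets, saw $count"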
00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:04.731 05:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:04.731 [2024-10-28 05:07:55.070773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:04.731 05:07:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:04.989 [2024-10-28 05:07:55.386963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:04.989 05:07:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:05.247 NVMe0n1 00:32:05.247 05:07:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:05.812 00:32:05.812 05:07:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:06.070 00:32:06.070 05:07:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:06.070 05:07:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:06.328 05:07:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:06.586 05:07:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:09.868 05:08:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:09.868 05:08:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:09.868 05:08:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2440112 00:32:09.868 05:08:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:09.868 05:08:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2440112 00:32:11.243 { 00:32:11.243 "results": [ 00:32:11.243 { 00:32:11.243 "job": "NVMe0n1", 00:32:11.243 "core_mask": "0x1", 
00:32:11.243 "workload": "verify", 00:32:11.243 "status": "finished", 00:32:11.243 "verify_range": { 00:32:11.243 "start": 0, 00:32:11.243 "length": 16384 00:32:11.243 }, 00:32:11.243 "queue_depth": 128, 00:32:11.243 "io_size": 4096, 00:32:11.243 "runtime": 1.006144, 00:32:11.243 "iops": 8216.517715158068, 00:32:11.243 "mibps": 32.095772324836204, 00:32:11.243 "io_failed": 0, 00:32:11.243 "io_timeout": 0, 00:32:11.243 "avg_latency_us": 15514.305340493622, 00:32:11.243 "min_latency_us": 1338.2142194171154, 00:32:11.243 "max_latency_us": 15571.947280490069 00:32:11.243 } 00:32:11.243 ], 00:32:11.243 "core_count": 1 00:32:11.243 } 00:32:11.243 05:08:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:11.243 [2024-10-28 05:07:54.491519] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:32:11.243 [2024-10-28 05:07:54.491606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2439344 ] 00:32:11.243 [2024-10-28 05:07:54.622938] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:11.243 [2024-10-28 05:07:54.661891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.243 [2024-10-28 05:07:54.706232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.243 [2024-10-28 05:07:57.110471] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:11.243 [2024-10-28 05:07:57.110553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.243 [2024-10-28 05:07:57.110577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.243 [2024-10-28 05:07:57.110595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.243 [2024-10-28 05:07:57.110609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.243 [2024-10-28 05:07:57.110623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.243 [2024-10-28 05:07:57.110643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.243 [2024-10-28 05:07:57.110714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.243 [2024-10-28 05:07:57.110731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.243 [2024-10-28 05:07:57.110745] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:32:11.243 [2024-10-28 05:07:57.110793] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:11.243 [2024-10-28 05:07:57.110825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170e4d0 (9): Bad file descriptor 00:32:11.243 [2024-10-28 05:07:57.158045] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:11.243 Running I/O for 1 seconds... 00:32:11.243 8139.00 IOPS, 31.79 MiB/s 00:32:11.243 Latency(us) 00:32:11.243 [2024-10-28T04:08:01.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.243 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:11.243 Verification LBA range: start 0x0 length 0x4000 00:32:11.243 NVMe0n1 : 1.01 8216.52 32.10 0.00 0.00 15514.31 1338.21 15571.95 00:32:11.243 [2024-10-28T04:08:01.839Z] =================================================================================================================== 00:32:11.243 [2024-10-28T04:08:01.839Z] Total : 8216.52 32.10 0.00 0.00 15514.31 1338.21 15571.95 00:32:11.243 05:08:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:11.244 05:08:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:11.244 05:08:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:11.501 05:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:11.501 05:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:11.759 05:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:12.324 05:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:15.600 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:15.600 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:15.600 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2439344 00:32:15.600 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2439344 ']' 00:32:15.600 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2439344 00:32:15.600 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:15.600 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:15.600 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2439344 00:32:15.600 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:15.601 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:15.601 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2439344' 00:32:15.601 killing process with pid 2439344 00:32:15.601 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2439344 00:32:15.601 05:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2439344 00:32:15.601 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:15.601 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:15.858 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:15.858 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:15.858 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:15.858 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:15.858 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:15.858 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:15.858 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:15.858 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:15.858 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:15.858 rmmod nvme_tcp 00:32:15.858 rmmod nvme_fabrics 00:32:16.116 rmmod nvme_keyring 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 2437061 ']' 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 2437061 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2437061 ']' 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2437061 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2437061 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:16.116 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2437061' 00:32:16.116 killing process with pid 2437061 00:32:16.117 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2437061 00:32:16.117 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2437061 00:32:16.375 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso 
']' 00:32:16.375 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:16.375 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:16.375 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:16.375 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:32:16.375 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:16.375 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:32:16.375 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.375 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:16.375 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.375 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.375 05:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.280 05:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:18.280 00:32:18.280 real 0m36.978s 00:32:18.280 user 2m10.326s 00:32:18.280 sys 0m5.973s 00:32:18.280 05:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:18.280 05:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:18.280 ************************************ 00:32:18.280 END TEST nvmf_failover 00:32:18.280 ************************************ 00:32:18.280 05:08:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:18.281 05:08:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:18.281 05:08:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:18.281 05:08:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.281 ************************************ 00:32:18.281 START TEST nvmf_host_discovery 00:32:18.281 ************************************ 00:32:18.281 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:18.281 * Looking for test storage... 
00:32:18.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1689 -- # lcov --version 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:32:18.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.580 --rc genhtml_branch_coverage=1 00:32:18.580 --rc genhtml_function_coverage=1 00:32:18.580 --rc genhtml_legend=1 00:32:18.580 --rc geninfo_all_blocks=1 00:32:18.580 --rc geninfo_unexecuted_blocks=1 00:32:18.580 00:32:18.580 ' 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:32:18.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.580 --rc genhtml_branch_coverage=1 00:32:18.580 --rc genhtml_function_coverage=1 00:32:18.580 --rc genhtml_legend=1 00:32:18.580 --rc geninfo_all_blocks=1 00:32:18.580 --rc geninfo_unexecuted_blocks=1 00:32:18.580 00:32:18.580 ' 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:32:18.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.580 --rc genhtml_branch_coverage=1 00:32:18.580 --rc genhtml_function_coverage=1 00:32:18.580 --rc genhtml_legend=1 00:32:18.580 --rc geninfo_all_blocks=1 00:32:18.580 --rc geninfo_unexecuted_blocks=1 00:32:18.580 00:32:18.580 ' 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:32:18.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.580 --rc genhtml_branch_coverage=1 00:32:18.580 --rc genhtml_function_coverage=1 00:32:18.580 --rc genhtml_legend=1 00:32:18.580 --rc geninfo_all_blocks=1 00:32:18.580 --rc geninfo_unexecuted_blocks=1 00:32:18.580 00:32:18.580 ' 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:18.580 05:08:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.580 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:18.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:18.581 05:08:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:20.487 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:20.487 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.487 05:08:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:20.487 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.487 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:20.488 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:20.488 
05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.488 05:08:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.488 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.488 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.488 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:20.488 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.488 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:20.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:20.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:32:20.747 00:32:20.747 --- 10.0.0.2 ping statistics --- 00:32:20.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.747 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:20.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:32:20.747 00:32:20.747 --- 10.0.0.1 ping statistics --- 00:32:20.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.747 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=2442689 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 2442689 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2442689 ']' 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:20.747 05:08:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.747 [2024-10-28 05:08:11.177671] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:32:20.747 [2024-10-28 05:08:11.177750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.747 [2024-10-28 05:08:11.315349] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:32:21.006 [2024-10-28 05:08:11.357808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.006 [2024-10-28 05:08:11.405039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:21.006 [2024-10-28 05:08:11.405123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:21.006 [2024-10-28 05:08:11.405140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:21.006 [2024-10-28 05:08:11.405154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:21.006 [2024-10-28 05:08:11.405166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:21.006 [2024-10-28 05:08:11.405861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:21.941 [2024-10-28 05:08:12.204261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:21.941 [2024-10-28 05:08:12.212423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:21.941 null0 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:21.941 05:08:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:21.941 null1 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2442842 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2442842 /tmp/host.sock 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2442842 ']' 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:21.941 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:21.941 05:08:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:21.941 [2024-10-28 05:08:12.294703] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:32:21.941 [2024-10-28 05:08:12.294793] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442842 ] 00:32:21.941 [2024-10-28 05:08:12.434510] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
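At this point in the trace two SPDK apps are involved: the namespaced target (nvmfpid 2442689, launched with -i 0 -e 0xFFFF -m 0x2 under ip netns exec cvl_0_0_ns_spdk and answering RPC on the default /var/tmp/spdk.sock) has been given a TCP transport, a discovery listener on 10.0.0.2:8009 and the null bdevs null0/null1, while a second nvmf_tgt (-m 0x1 -r /tmp/host.sock) is being started to act as the host side of the discovery test. A minimal sketch of the same provisioning done directly with rpc.py instead of the harness's rpc_cmd wrapper; the scripts/rpc.py path is an assumption, the RPC names and flags are exactly the ones traced above:

  # Sketch only: replay of the traced RPCs against the two RPC sockets.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Target side: started without -r, so it listens on /var/tmp/spdk.sock even though its
  # network interface (cvl_0_0, 10.0.0.2) lives inside the cvl_0_0_ns_spdk namespace.
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
          -t tcp -a 10.0.0.2 -s 8009
  $SPDK/scripts/rpc.py bdev_null_create null0 1000 512
  $SPDK/scripts/rpc.py bdev_null_create null1 1000 512
  $SPDK/scripts/rpc.py bdev_wait_for_examine
  # Host side: the second instance is addressed explicitly through its -r socket.
  $SPDK/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers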
00:32:21.941 [2024-10-28 05:08:12.471847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.941 [2024-10-28 05:08:12.522556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.875 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.876 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
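The blocks tagged host/discovery.sh@59 and @55 above are the test's two read-back helpers against the host-side app: one lists attached NVMe controller names via bdev_nvme_get_controllers, the other lists bdev names via bdev_get_bdevs, each flattened by jq/sort/xargs into a single space-separated line, and the [[ '' == '' ]] comparisons assert that nothing is attached before discovery has been pointed at the target. A rough reconstruction from the expanded xtrace; the authoritative definitions live in host/discovery.sh, and the rpc_cmd stand-in below is only an approximation of the harness wrapper:

  # Reconstructed from the trace above, not copied from the test sources.
  rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }
  get_subsystem_names() {
          rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
          rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  # Both print an empty line at this stage; after discovery attaches they return
  # "nvme0" and "nvme0n1 ..." respectively.
  get_subsystem_names
  get_bdev_list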
00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.134 [2024-10-28 05:08:13.636826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
jq -r '.[].name' 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:23.134 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:23.135 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:23.135 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:23.135 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:23.135 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:23.135 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:23.135 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.135 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:32:23.393 05:08:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:23.959 [2024-10-28 05:08:14.409377] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:23.959 [2024-10-28 05:08:14.409421] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:23.959 [2024-10-28 05:08:14.409443] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:23.959 
[2024-10-28 05:08:14.536537] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:24.218 [2024-10-28 05:08:14.717252] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:24.218 [2024-10-28 05:08:14.718266] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6321f0:1 started. 00:32:24.218 [2024-10-28 05:08:14.720217] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:24.218 [2024-10-28 05:08:14.720241] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:24.218 [2024-10-28 05:08:14.726692] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6321f0 was disconnected and freed. delete nvme_qpair. 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:24.477 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.478 05:08:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # get_notification_count 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:24.478 [2024-10-28 05:08:14.980372] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x600a70:1 started. 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:24.478 [2024-10-28 05:08:14.986675] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x600a70 was disconnected and freed. delete nvme_qpair. 
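The sequence just above is the discovery round trip itself: the host app attaches a discovery controller to 10.0.0.2:8009, reads the discovery log page, finds nqn.2016-06.io.spdk:cnode0 at 4420, creates controller nvme0, and the first null namespace shows up as bdev nvme0n1; the test then waits for that state and for exactly one new notification before exposing null1 as a second namespace (host/discovery.sh@111). The waiting machinery is the waitforcondition/get_notification_count pair whose bodies leak into the trace (local max=10, eval of the condition string, sleep 1 between retries, notify_get_notifications counted with jq and the notify_id cursor advanced). A sketch reconstructed from those traced lines, reusing the rpc_cmd stand-in from the earlier sketch; the real helpers in common/autotest_common.sh and host/discovery.sh may differ in detail:

  # Poll a shell condition up to 10 times, one second apart (per autotest_common.sh@914-920).
  waitforcondition() {
          local cond=$1 max=10
          while (( max-- )); do
                  eval "$cond" && return 0
                  sleep 1
          done
          return 1
  }
  # Count only notifications newer than the last seen id, then advance the cursor
  # (notify_id starts at 0 per host/discovery.sh@72).
  get_notification_count() {
          notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
          notify_id=$((notify_id + notification_count))
  }
  notify_id=0
  waitforcondition 'get_notification_count && ((notification_count == 1))'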
00:32:24.478 05:08:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.478 [2024-10-28 05:08:15.061622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:24.478 [2024-10-28 05:08:15.061997] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:24.478 [2024-10-28 05:08:15.062027] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.478 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:24.479 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:24.479 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.479 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.479 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:24.479 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:24.479 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:24.479 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.479 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:24.479 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.479 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:24.479 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:24.737 05:08:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:24.737 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.738 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:24.738 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:24.738 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.738 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:24.738 05:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:24.738 [2024-10-28 05:08:15.189603] bdev_nvme.c:7215:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:24.738 [2024-10-28 05:08:15.251276] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:24.738 [2024-10-28 05:08:15.251333] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:24.738 [2024-10-28 05:08:15.251351] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:24.738 [2024-10-28 05:08:15.251361] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.673 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.934 [2024-10-28 05:08:16.278933] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:25.934 [2024-10-28 05:08:16.278968] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:25.934 [2024-10-28 05:08:16.287539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.934 [2024-10-28 05:08:16.287572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.934 [2024-10-28 05:08:16.287590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:32:25.934 [2024-10-28 05:08:16.287605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.934 [2024-10-28 05:08:16.287619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.934 [2024-10-28 05:08:16.287632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.934 [2024-10-28 05:08:16.287656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.934 [2024-10-28 05:08:16.287669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.934 [2024-10-28 05:08:16.287683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x604440 is same with the state(6) to be set 00:32:25.934 [2024-10-28 05:08:16.297507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x604440 (9): Bad file descriptor 00:32:25.934 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.934 [2024-10-28 05:08:16.307523] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:25.934 [2024-10-28 05:08:16.307546] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:25.934 [2024-10-28 05:08:16.307556] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:25.934 [2024-10-28 05:08:16.307564] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:25.934 [2024-10-28 05:08:16.307608] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:25.934 [2024-10-28 05:08:16.307775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.934 [2024-10-28 05:08:16.307805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x604440 with addr=10.0.0.2, port=4420 00:32:25.934 [2024-10-28 05:08:16.307821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x604440 is same with the state(6) to be set 00:32:25.934 [2024-10-28 05:08:16.307844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x604440 (9): Bad file descriptor 00:32:25.934 [2024-10-28 05:08:16.307866] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:25.934 [2024-10-28 05:08:16.307880] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:25.934 [2024-10-28 05:08:16.307895] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:25.934 [2024-10-28 05:08:16.307907] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:25.934 [2024-10-28 05:08:16.307916] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
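The error burst that begins here is the expected fallout of host/discovery.sh@127 removing the 4420 listener a few lines up: the target tears down the admin queue, so the host's outstanding ASYNC EVENT REQUESTs complete as "ABORTED - SQ DELETION", the qpair at 0x604440 is reported with a bad file descriptor, and every reconnect attempt to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED) because nothing listens on that port any more, while 4421 still does. A throwaway probe, not part of the test, that would confirm the same thing from the default namespace using bash's /dev/tcp:

  # 4420 was just removed; 4421 is still listed for cnode0.
  (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null && echo '4420 still open' || echo '4420 refused, as expected'
  (exec 3<>/dev/tcp/10.0.0.2/4421) 2>/dev/null && echo '4421 open, as expected' || echo '4421 unexpectedly closed'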
00:32:25.934 [2024-10-28 05:08:16.307949] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:25.934 [2024-10-28 05:08:16.317631] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:25.934 [2024-10-28 05:08:16.317660] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:25.934 [2024-10-28 05:08:16.317671] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:25.934 [2024-10-28 05:08:16.317679] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:25.934 [2024-10-28 05:08:16.317703] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:25.935 [2024-10-28 05:08:16.317853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.935 [2024-10-28 05:08:16.317880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x604440 with addr=10.0.0.2, port=4420 00:32:25.935 [2024-10-28 05:08:16.317896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x604440 is same with the state(6) to be set 00:32:25.935 [2024-10-28 05:08:16.317919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x604440 (9): Bad file descriptor 00:32:25.935 [2024-10-28 05:08:16.317940] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:25.935 [2024-10-28 05:08:16.317955] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:25.935 [2024-10-28 05:08:16.317968] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:25.935 [2024-10-28 05:08:16.317980] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:25.935 [2024-10-28 05:08:16.317988] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:25.935 [2024-10-28 05:08:16.318003] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:25.935 [2024-10-28 05:08:16.327715] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:25.935 [2024-10-28 05:08:16.327740] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:25.935 [2024-10-28 05:08:16.327756] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:25.935 [2024-10-28 05:08:16.327765] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:25.935 [2024-10-28 05:08:16.327791] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:25.935 [2024-10-28 05:08:16.327928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.935 [2024-10-28 05:08:16.327956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x604440 with addr=10.0.0.2, port=4420 00:32:25.935 [2024-10-28 05:08:16.327973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x604440 is same with the state(6) to be set 00:32:25.935 [2024-10-28 05:08:16.327995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x604440 (9): Bad file descriptor 00:32:25.935 [2024-10-28 05:08:16.328017] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:25.935 [2024-10-28 05:08:16.328032] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:25.935 [2024-10-28 05:08:16.328046] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
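
The xtrace interleaved with those reconnect attempts is host/discovery.sh waiting for the namespaces to come back: waitforcondition (from autotest_common.sh) re-evaluates a condition string up to max=10 times, and get_bdev_list asks the host-side app, over its private RPC socket /tmp/host.sock, for its bdev names and normalizes them with jq, sort and xargs. A reconstructed sketch of that pattern follows; only cond, max=10, the (( max-- )) loop and the eval are visible in the trace, so the rpc_cmd stand-in and the one-second pause between attempts are assumptions:

  rpc_cmd() {
      # Assumed stand-in for the suite's wrapper around scripts/rpc.py.
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"
  }

  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1   # assumed; the actual delay is not visible in this trace
      done
      return 1
  }

  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
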
00:32:25.935 [2024-10-28 05:08:16.328058] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:25.935 [2024-10-28 05:08:16.328067] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:25.935 [2024-10-28 05:08:16.328082] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:25.935 [2024-10-28 05:08:16.337802] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:25.935 [2024-10-28 05:08:16.337827] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:25.935 [2024-10-28 05:08:16.337837] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:25.935 [2024-10-28 05:08:16.337845] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:25.935 [2024-10-28 05:08:16.337871] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:25.935 [2024-10-28 05:08:16.338052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.935 [2024-10-28 05:08:16.338098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x604440 with addr=10.0.0.2, port=4420 00:32:25.935 [2024-10-28 05:08:16.338116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x604440 is same with the state(6) to be set 00:32:25.935 [2024-10-28 05:08:16.338140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x604440 (9): Bad file descriptor 00:32:25.935 [2024-10-28 05:08:16.338164] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:25.935 [2024-10-28 05:08:16.338180] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:25.935 [2024-10-28 05:08:16.338194] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:25.935 [2024-10-28 05:08:16.338207] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:25.935 [2024-10-28 05:08:16.338217] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:25.935 [2024-10-28 05:08:16.338233] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:25.935 [2024-10-28 05:08:16.347883] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:25.935 [2024-10-28 05:08:16.347926] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:25.935 [2024-10-28 05:08:16.347938] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:25.935 [2024-10-28 05:08:16.347945] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:25.935 [2024-10-28 05:08:16.347970] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
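
A note on the odd-looking comparisons throughout this trace, such as [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]: that is simply how bash xtrace renders a [[ ]] test whose right-hand operand was quoted, escaping every character to show it is matched literally rather than as a glob pattern. The condition itself is an ordinary string equality, reproducible as:

  set -x
  expected="nvme0n1 nvme0n2"
  [[ "nvme0n1 nvme0n2" == "$expected" ]] && echo "bdev list matches"
  # xtrace renders the test roughly as: [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
  set +x
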
00:32:25.935 [2024-10-28 05:08:16.348148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.935 [2024-10-28 05:08:16.348176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x604440 with addr=10.0.0.2, port=4420 00:32:25.935 [2024-10-28 05:08:16.348193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x604440 is same with the state(6) to be set 00:32:25.935 [2024-10-28 05:08:16.348215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x604440 (9): Bad file descriptor 00:32:25.935 [2024-10-28 05:08:16.348236] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:25.935 [2024-10-28 05:08:16.348250] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:25.935 [2024-10-28 05:08:16.348263] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:25.935 [2024-10-28 05:08:16.348275] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:25.935 [2024-10-28 05:08:16.348283] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:25.935 [2024-10-28 05:08:16.348298] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:25.935 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.935 [2024-10-28 05:08:16.357981] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:25.935 [2024-10-28 05:08:16.358003] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:25.935 [2024-10-28 05:08:16.358013] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:25.935 [2024-10-28 05:08:16.358020] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:25.935 [2024-10-28 05:08:16.358044] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:25.935 [2024-10-28 05:08:16.358181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.935 [2024-10-28 05:08:16.358208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x604440 with addr=10.0.0.2, port=4420 00:32:25.936 [2024-10-28 05:08:16.358223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x604440 is same with the state(6) to be set 00:32:25.936 [2024-10-28 05:08:16.358245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x604440 (9): Bad file descriptor 00:32:25.936 [2024-10-28 05:08:16.358266] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:25.936 [2024-10-28 05:08:16.358280] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:25.936 [2024-10-28 05:08:16.358292] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
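
Immediately below, the stale path is finally dropped: discovery reports NVM nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 as not found and 4421 as found again, the bdev-list condition passes, and the script then waits for get_subsystem_paths nvme0 to report only the second port. As the trsvcid extraction in that trace shows, the helper reduces to one rpc call plus jq; a sketch of the check it performs, reusing the rpc_cmd stand-in above (NVMF_SECOND_PORT is 4421 per test/nvmf/common.sh, as echoed later in this log):

  NVMF_SECOND_PORT=4421

  get_subsystem_paths() {
      local name=$1
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  # Passes once the only remaining path to nvme0 is the 4421 listener.
  [[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]] \
      && echo "failover to port ${NVMF_SECOND_PORT} complete"
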
00:32:25.936 [2024-10-28 05:08:16.358304] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:25.936 [2024-10-28 05:08:16.358312] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:25.936 [2024-10-28 05:08:16.358327] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:25.936 [2024-10-28 05:08:16.365677] bdev_nvme.c:7078:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:25.936 [2024-10-28 05:08:16.365723] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == 
expected_count))' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:25.936 05:08:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:25.936 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.195 05:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.128 [2024-10-28 05:08:17.660797] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:27.128 [2024-10-28 05:08:17.660835] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:27.128 [2024-10-28 05:08:17.660858] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:27.385 [2024-10-28 05:08:17.746887] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:27.385 [2024-10-28 05:08:17.811467] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:27.385 [2024-10-28 05:08:17.812301] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x63e080:1 started. 00:32:27.385 [2024-10-28 05:08:17.814757] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:27.385 [2024-10-28 05:08:17.814798] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:27.385 [2024-10-28 05:08:17.817624] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x63e080 was disconnected and freed. delete nvme_qpair. 
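
Before restarting discovery above, the script checked its notification bookkeeping: get_notification_count fetches everything the host app has queued from the last consumed notification id onward (notify_get_notifications -i 2), counts the result with jq, and advances notify_id, here from 2 to 4; the two new events are presumably the nvme0n1 and nvme0n2 bdevs going away when bdev_nvme_stop_discovery removed the controller. With the counts matching, discovery is started again and attaches to the 4421 listener, which is what the ctrlr-created and attach-done lines above show. A sketch of the counting step, reconstructed from the visible assignments (the exact bookkeeping in host/discovery.sh may differ), again reusing the rpc_cmd stand-in:

  notify_id=2          # last notification id already consumed by the test

  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  expected_count=2
  get_notification_count
  (( notification_count == expected_count )) && echo "saw ${notification_count} new notifications"
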
00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.385 request: 00:32:27.385 { 00:32:27.385 "name": "nvme", 00:32:27.385 "trtype": "tcp", 00:32:27.385 "traddr": "10.0.0.2", 00:32:27.385 "adrfam": "ipv4", 00:32:27.385 "trsvcid": "8009", 00:32:27.385 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:27.385 "wait_for_attach": true, 00:32:27.385 "method": "bdev_nvme_start_discovery", 00:32:27.385 "req_id": 1 00:32:27.385 } 00:32:27.385 Got JSON-RPC error response 00:32:27.385 response: 00:32:27.385 { 00:32:27.385 "code": -17, 00:32:27.385 "message": "File exists" 00:32:27.385 } 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:27.385 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.386 request: 00:32:27.386 { 00:32:27.386 "name": "nvme_second", 00:32:27.386 "trtype": "tcp", 00:32:27.386 "traddr": "10.0.0.2", 00:32:27.386 "adrfam": "ipv4", 00:32:27.386 "trsvcid": "8009", 00:32:27.386 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:27.386 "wait_for_attach": true, 00:32:27.386 "method": "bdev_nvme_start_discovery", 00:32:27.386 "req_id": 1 00:32:27.386 } 00:32:27.386 Got JSON-RPC error response 00:32:27.386 response: 00:32:27.386 { 00:32:27.386 "code": -17, 00:32:27.386 "message": "File exists" 00:32:27.386 } 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
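
Both negative cases above behave the same way: once a discovery service is already running against 10.0.0.2:8009, bdev_nvme_start_discovery for either the same name (nvme) or a new one (nvme_second) aimed at that same discovery endpoint is rejected with JSON-RPC error -17, "File exists", and the NOT wrapper in autotest_common.sh turns the expected failure into a passing step (es=1). The final case, issued just after this against port 8010 with a 3000 ms attach timeout, instead fails with -110, "Connection timed out", since nothing listens there. The same calls can be issued directly with scripts/rpc.py, which exits non-zero when the RPC returns an error; a sketch reusing the exact flags shown in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Second discovery service against the same 8009 endpoint: expect -17 "File exists".
  if ! $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
      echo "duplicate discovery rejected, as expected"
  fi

  # Discovery against a port with no listener, bounded by a 3000 ms attach timeout:
  # expect -110 "Connection timed out".
  if ! $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000; then
      echo "discovery attach timed out, as expected"
  fi
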
00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:27.386 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.645 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:27.645 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:27.645 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.645 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:27.645 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.645 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:27.645 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.645 05:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:27.645 05:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.645 05:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:27.645 05:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:27.645 05:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:27.645 05:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:27.645 05:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:27.645 05:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:27.645 05:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:27.645 05:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:27.645 05:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:27.645 05:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.645 05:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.581 [2024-10-28 05:08:19.035261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.581 [2024-10-28 05:08:19.035304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x768070 with addr=10.0.0.2, port=8010 00:32:28.581 [2024-10-28 05:08:19.035325] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:28.581 [2024-10-28 05:08:19.035339] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:28.581 [2024-10-28 05:08:19.035351] 
bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:29.515 [2024-10-28 05:08:20.035331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.515 [2024-10-28 05:08:20.035398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x768070 with addr=10.0.0.2, port=8010 00:32:29.515 [2024-10-28 05:08:20.035431] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:29.515 [2024-10-28 05:08:20.035447] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:29.515 [2024-10-28 05:08:20.035460] bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:30.450 [2024-10-28 05:08:21.035118] bdev_nvme.c:7334:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:30.450 request: 00:32:30.450 { 00:32:30.450 "name": "nvme_second", 00:32:30.450 "trtype": "tcp", 00:32:30.450 "traddr": "10.0.0.2", 00:32:30.450 "adrfam": "ipv4", 00:32:30.450 "trsvcid": "8010", 00:32:30.450 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:30.450 "wait_for_attach": false, 00:32:30.450 "attach_timeout_ms": 3000, 00:32:30.450 "method": "bdev_nvme_start_discovery", 00:32:30.450 "req_id": 1 00:32:30.450 } 00:32:30.450 Got JSON-RPC error response 00:32:30.450 response: 00:32:30.450 { 00:32:30.450 "code": -110, 00:32:30.450 "message": "Connection timed out" 00:32:30.450 } 00:32:30.450 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:30.450 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:30.450 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:30.450 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:30.450 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:30.450 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:30.450 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:30.450 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:30.450 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.450 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:30.450 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.450 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2442842 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:30.709 rmmod nvme_tcp 00:32:30.709 rmmod nvme_fabrics 00:32:30.709 rmmod nvme_keyring 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 2442689 ']' 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 2442689 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2442689 ']' 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2442689 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2442689 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2442689' 00:32:30.709 killing process with pid 2442689 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2442689 00:32:30.709 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2442689 00:32:30.968 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:30.968 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:30.968 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:30.968 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:30.968 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:32:30.968 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:30.968 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:32:30.968 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:30.968 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:30.968 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.968 05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.968 
05:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.948 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:32.948 00:32:32.948 real 0m14.629s 00:32:32.948 user 0m21.543s 00:32:32.948 sys 0m2.804s 00:32:32.949 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:32.949 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.949 ************************************ 00:32:32.949 END TEST nvmf_host_discovery 00:32:32.949 ************************************ 00:32:32.949 05:08:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:32.949 05:08:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:32.949 05:08:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:32.949 05:08:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.949 ************************************ 00:32:32.949 START TEST nvmf_host_multipath_status 00:32:32.949 ************************************ 00:32:32.949 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:33.207 * Looking for test storage... 00:32:33.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:33.207 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:32:33.207 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1689 -- # lcov --version 00:32:33.207 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:33.208 05:08:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:32:33.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.208 --rc genhtml_branch_coverage=1 00:32:33.208 --rc genhtml_function_coverage=1 00:32:33.208 --rc genhtml_legend=1 00:32:33.208 --rc geninfo_all_blocks=1 00:32:33.208 --rc geninfo_unexecuted_blocks=1 00:32:33.208 00:32:33.208 ' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:32:33.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.208 --rc genhtml_branch_coverage=1 00:32:33.208 --rc genhtml_function_coverage=1 00:32:33.208 --rc genhtml_legend=1 00:32:33.208 --rc geninfo_all_blocks=1 00:32:33.208 --rc geninfo_unexecuted_blocks=1 00:32:33.208 00:32:33.208 ' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:32:33.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.208 --rc genhtml_branch_coverage=1 00:32:33.208 --rc genhtml_function_coverage=1 00:32:33.208 --rc genhtml_legend=1 00:32:33.208 --rc geninfo_all_blocks=1 00:32:33.208 --rc geninfo_unexecuted_blocks=1 00:32:33.208 00:32:33.208 ' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:32:33.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.208 --rc genhtml_branch_coverage=1 00:32:33.208 --rc genhtml_function_coverage=1 00:32:33.208 --rc 
genhtml_legend=1 00:32:33.208 --rc geninfo_all_blocks=1 00:32:33.208 --rc geninfo_unexecuted_blocks=1 00:32:33.208 00:32:33.208 ' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:32:33.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:33.208 05:08:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:35.740 05:08:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.740 
05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:35.740 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:35.740 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:35.740 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:35.740 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:35.740 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:35.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:35.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:32:35.741 00:32:35.741 --- 10.0.0.2 ping statistics --- 00:32:35.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.741 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:35.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:35.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:32:35.741 00:32:35.741 --- 10.0.0.1 ping statistics --- 00:32:35.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.741 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=2445966 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 2445966 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2445966 ']' 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:35.741 05:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:35.741 [2024-10-28 05:08:26.019221] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:32:35.741 [2024-10-28 05:08:26.019312] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.741 [2024-10-28 05:08:26.158124] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:35.741 [2024-10-28 05:08:26.199177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:35.741 [2024-10-28 05:08:26.248222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:35.741 [2024-10-28 05:08:26.248287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:35.741 [2024-10-28 05:08:26.248304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:35.741 [2024-10-28 05:08:26.248317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:35.741 [2024-10-28 05:08:26.248329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
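The bring-up recorded above reduces to a short sequence: one port of the ice NIC is moved into a private network namespace, both ends are addressed, NVMe/TCP traffic to port 4420 is allowed through iptables, reachability is verified with ping, and nvmf_tgt is started inside the namespace. A condensed sketch restating only commands already visible in the trace (interface names, addresses and binary paths are taken from the log; this is illustrative, not the real nvmftestinit/nvmfappstart helpers):

# Sketch of the recorded bring-up; cvl_0_0/cvl_0_1 and 10.0.0.1/10.0.0.2 come from the trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Accept NVMe/TCP connections to the first listener port and check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# Start the NVMe-oF target inside the namespace on two cores (-m 0x3), as recorded above.
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &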
00:32:35.741 [2024-10-28 05:08:26.249850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.741 [2024-10-28 05:08:26.249857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.675 05:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:36.675 05:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:36.675 05:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:36.675 05:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:36.675 05:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:36.675 05:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.675 05:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2445966 00:32:36.675 05:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:36.933 [2024-10-28 05:08:27.363534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.933 05:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:37.191 Malloc0 00:32:37.191 05:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:37.449 05:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:37.707 05:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:37.964 [2024-10-28 05:08:28.462002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.964 05:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:38.223 [2024-10-28 05:08:28.718037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:38.223 05:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2446253 00:32:38.223 05:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:38.223 05:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:38.223 05:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2446253 
/var/tmp/bdevperf.sock 00:32:38.223 05:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2446253 ']' 00:32:38.223 05:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:38.223 05:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:38.223 05:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:38.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:38.223 05:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:38.223 05:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:39.596 05:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:39.597 05:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:39.597 05:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:39.597 05:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:40.161 Nvme0n1 00:32:40.161 05:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:40.727 Nvme0n1 00:32:40.727 05:08:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:40.727 05:08:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:42.627 05:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:42.627 05:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:42.885 05:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:43.451 05:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:44.385 05:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:44.385 05:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:44.385 05:08:34 
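The RPC calls above set up the device under test: a TCP transport, a 64 MB / 512-byte-block Malloc0 namespace exported through nqn.2016-06.io.spdk:cnode1 on two listeners (4420 and 4421), and a bdevperf instance that attaches both listeners to the same controller name with -x multipath, so they become two I/O paths of one Nvme0n1 bdev. A condensed sketch of that recorded sequence (the $RPC shorthand is only for brevity; every command and argument appears in the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Target side (default RPC socket): transport, backing bdev, subsystem, two listeners.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# Host side (bdevperf RPC socket): bdev_nvme options as recorded, then one attach per listener
# with -x multipath so both paths land under the same Nvme0 controller / Nvme0n1 bdev.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10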
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.385 05:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:44.644 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.644 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:44.644 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.644 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:44.902 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:44.902 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:44.902 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.902 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:45.160 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.160 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:45.160 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.160 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:45.418 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.418 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:45.418 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.418 05:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:45.676 05:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.676 05:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:45.676 05:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.676 05:08:36 
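Each check_status round above is built from the same primitive: read the host's view of the two paths with bdev_nvme_get_io_paths and pick one attribute (current, connected, accessible) for one listener port out of the JSON with jq. A minimal restatement of that pattern, mirroring the port_status helper visible in the trace (the RPC call and jq filter are copied from it; the parametrized function is a sketch, not the script itself):

# port_status PORT FIELD EXPECTED, e.g. "port_status 4420 current true"
port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                 -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
    [[ $actual == "$expected" ]]
}
# One full round with both listeners optimized, as asserted above (check_status true false true true true true):
port_status 4420 current true    && port_status 4421 current false
port_status 4420 connected true  && port_status 4421 connected true
port_status 4420 accessible true && port_status 4421 accessible true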
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:45.934 05:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.934 05:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:45.934 05:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:46.192 05:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:46.451 05:08:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:47.826 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:47.826 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:47.826 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.826 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:47.826 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:47.826 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:47.826 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.826 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:48.085 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:48.085 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:48.085 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.085 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:48.343 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:48.343 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:48.343 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.343 05:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:48.600 05:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:48.600 05:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:48.600 05:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.600 05:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:48.858 05:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:48.858 05:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:48.858 05:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.858 05:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:49.423 05:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.423 05:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:49.423 05:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:49.423 05:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:49.985 05:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:50.919 05:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:50.919 05:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:50.919 05:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.919 05:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:51.177 05:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.177 05:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:51.177 05:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.177 05:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:51.435 05:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:51.435 05:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:51.435 05:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.435 05:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:51.692 05:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.692 05:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:51.692 05:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.692 05:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:51.950 05:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.950 05:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:51.950 05:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.950 05:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:52.207 05:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.207 05:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:52.207 05:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.207 05:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:52.465 05:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.465 05:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:52.465 05:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:32:52.724 05:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:53.290 05:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:54.225 05:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:54.225 05:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:54.225 05:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.225 05:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:54.483 05:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.483 05:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:54.483 05:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.483 05:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:54.741 05:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:54.741 05:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:54.741 05:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.741 05:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:55.000 05:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.000 05:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:55.000 05:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.000 05:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:55.258 05:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.258 05:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:55.258 05:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:32:55.258 05:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:55.516 05:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.516 05:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:55.516 05:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.516 05:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:55.774 05:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:55.774 05:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:55.774 05:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:56.032 05:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:56.598 05:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:57.531 05:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:57.531 05:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:57.531 05:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.531 05:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:57.790 05:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:57.790 05:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:57.790 05:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.790 05:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:58.048 05:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:58.048 05:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:58.048 05:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.048 05:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:58.306 05:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.306 05:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:58.306 05:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.306 05:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:58.601 05:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.601 05:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:58.601 05:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.601 05:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:58.893 05:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:58.893 05:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:58.893 05:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.893 05:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:59.151 05:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:59.151 05:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:59.151 05:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:59.408 05:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:59.666 05:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:00.598 05:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:00.598 05:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:00.598 05:08:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.598 05:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:01.164 05:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:01.164 05:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:01.164 05:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.164 05:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:01.422 05:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.422 05:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:01.422 05:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.422 05:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:01.681 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.681 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:01.681 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.681 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:01.939 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.939 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:01.939 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.939 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:02.197 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:02.197 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:02.197 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.197 
05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:02.455 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.455 05:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:02.713 05:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:02.713 05:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:02.971 05:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:03.230 05:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:04.164 05:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:04.164 05:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:04.164 05:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.164 05:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:04.730 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.730 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:04.730 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.730 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:04.989 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.989 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:04.989 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.989 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:05.247 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.247 05:08:55 
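At host/multipath_status.sh@116 the test switches the Nvme0n1 bdev to the active_active multipath policy and then repeats the ANA sweep; the ANA states themselves are changed per listener on the target via nvmf_subsystem_listener_set_ana_state, with a short sleep before the paths are re-read. A condensed sketch using only commands from the trace (the set_ANA_state helper mirrors multipath_status.sh@59-60; $RPC is shorthand as before):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# ANA state is set per listener on the target; the host's io_paths view updates shortly after.
set_ANA_state() {   # set_ANA_state <state for 4420> <state for 4421>
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}
# Switch the host bdev to active/active multipath, then walk the ANA combinations again.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
set_ANA_state optimized optimized
sleep 1    # matches the "sleep 1" between each state change and check_status round above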
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:05.247 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.247 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:05.505 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.505 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:05.505 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.505 05:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:05.763 05:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.763 05:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:05.763 05:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.763 05:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:06.021 05:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.021 05:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:06.021 05:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:06.278 05:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:06.536 05:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:07.468 05:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:07.468 05:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:07.468 05:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.468 05:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.725 05:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.725 05:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:07.725 05:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.725 05:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:08.291 05:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.291 05:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:08.291 05:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.291 05:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:08.291 05:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.291 05:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:08.291 05:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.291 05:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:08.549 05:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.549 05:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:08.549 05:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.549 05:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:09.115 05:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.115 05:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:09.115 05:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.115 05:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:09.372 05:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.372 05:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:09.372 
05:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:09.630 05:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:09.889 05:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:10.824 05:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:10.824 05:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:10.824 05:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.824 05:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:11.082 05:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.082 05:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:11.082 05:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.082 05:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:11.340 05:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.340 05:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:11.340 05:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.340 05:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:11.598 05:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.598 05:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:11.598 05:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.598 05:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:12.164 05:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.164 05:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:12.164 05:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.164 05:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:12.422 05:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.422 05:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:12.422 05:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.422 05:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:12.680 05:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.680 05:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:12.680 05:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:12.938 05:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:13.195 05:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:14.129 05:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:14.129 05:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:14.129 05:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.129 05:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:14.388 05:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.388 05:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:14.388 05:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.388 05:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:14.646 05:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:33:14.646 05:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:14.646 05:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.646 05:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:14.905 05:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.905 05:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:14.905 05:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.905 05:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:15.163 05:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.163 05:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:15.163 05:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.163 05:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:15.421 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.421 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:15.680 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.680 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:15.938 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:15.938 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2446253 00:33:15.938 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2446253 ']' 00:33:15.938 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2446253 00:33:15.938 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:15.938 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:15.938 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2446253 00:33:15.938 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:33:15.938 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:33:15.938 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2446253' 00:33:15.938 killing process with pid 2446253 00:33:15.938 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2446253 00:33:15.938 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2446253 00:33:15.938 { 00:33:15.938 "results": [ 00:33:15.938 { 00:33:15.938 "job": "Nvme0n1", 00:33:15.938 "core_mask": "0x4", 00:33:15.938 "workload": "verify", 00:33:15.938 "status": "terminated", 00:33:15.938 "verify_range": { 00:33:15.938 "start": 0, 00:33:15.938 "length": 16384 00:33:15.938 }, 00:33:15.938 "queue_depth": 128, 00:33:15.938 "io_size": 4096, 00:33:15.938 "runtime": 35.111278, 00:33:15.938 "iops": 7475.461303345324, 00:33:15.938 "mibps": 29.201020716192673, 00:33:15.938 "io_failed": 0, 00:33:15.938 "io_timeout": 0, 00:33:15.938 "avg_latency_us": 17096.641092032984, 00:33:15.938 "min_latency_us": 345.1984406905513, 00:33:15.938 "max_latency_us": 4036248.735103026 00:33:15.938 } 00:33:15.938 ], 00:33:15.938 "core_count": 1 00:33:15.938 } 00:33:16.200 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2446253 00:33:16.200 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:16.200 [2024-10-28 05:08:28.785575] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:33:16.200 [2024-10-28 05:08:28.785699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2446253 ] 00:33:16.200 [2024-10-28 05:08:28.921532] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:16.200 [2024-10-28 05:08:28.958491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.200 [2024-10-28 05:08:29.004840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.200 Running I/O for 90 seconds... 
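The xtrace above boils down to two RPC patterns repeated for every ANA permutation: nvmf_subsystem_listener_set_ana_state flips the state of each listener, then bdev_nvme_get_io_paths is filtered through jq to assert the current/connected/accessible flags per port. A minimal sketch of that loop follows, assuming the same rpc.py path, bdevperf socket and listener addresses that appear in the trace; the helper names mirror set_ANA_state and port_status from host/multipath_status.sh, but the bodies here are reconstructed from the trace, not copied from the repository.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # port_status <trsvcid> <field> <expected> -- compare one io_path flag with the expected value
  port_status() {
    local actual
    actual=$("$RPC" -s "$SOCK" bdev_nvme_get_io_paths |
      jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$actual" == "$3" ]]
  }

  # set_ANA_state <state for 4420> <state for 4421> -- reconfigure both listeners of cnode1
  set_ANA_state() {
    "$RPC" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$RPC" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # Mirrors the last permutation checked above: 4421 made inaccessible, then verified.
  set_ANA_state non_optimized inaccessible
  sleep 1
  port_status 4421 accessible false

The terminated-job JSON printed just before try.txt is dumped carries the aggregate numbers for the verify workload on Nvme0n1 (runtime 35.1 s, roughly 7475 IOPS); the per-second samples and qpair notices that follow are the replayed contents of try.txt itself.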
00:33:16.200 8225.00 IOPS, 32.13 MiB/s [2024-10-28T04:09:06.796Z] 8274.50 IOPS, 32.32 MiB/s [2024-10-28T04:09:06.796Z] 8264.00 IOPS, 32.28 MiB/s [2024-10-28T04:09:06.796Z] 8270.00 IOPS, 32.30 MiB/s [2024-10-28T04:09:06.796Z] 8263.60 IOPS, 32.28 MiB/s [2024-10-28T04:09:06.796Z] 8218.33 IOPS, 32.10 MiB/s [2024-10-28T04:09:06.796Z] 8155.29 IOPS, 31.86 MiB/s [2024-10-28T04:09:06.796Z] 8100.00 IOPS, 31.64 MiB/s [2024-10-28T04:09:06.796Z] 8046.22 IOPS, 31.43 MiB/s [2024-10-28T04:09:06.796Z] 8053.60 IOPS, 31.46 MiB/s [2024-10-28T04:09:06.796Z] 8090.45 IOPS, 31.60 MiB/s [2024-10-28T04:09:06.796Z] 8097.92 IOPS, 31.63 MiB/s [2024-10-28T04:09:06.796Z] 8107.85 IOPS, 31.67 MiB/s [2024-10-28T04:09:06.796Z] 8127.36 IOPS, 31.75 MiB/s [2024-10-28T04:09:06.796Z] 8140.07 IOPS, 31.80 MiB/s [2024-10-28T04:09:06.796Z] [2024-10-28 05:08:46.574750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.574824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.574884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.574905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.574931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.574958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.574997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.575013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.575049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.575064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.575088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.575104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.575125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.575143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.575165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.575181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.576367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.576416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.576457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.576496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.576536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.200 [2024-10-28 05:08:46.576590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.200 [2024-10-28 05:08:46.576630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.200 [2024-10-28 05:08:46.576698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.200 [2024-10-28 05:08:46.576739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
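The paired notices in this stretch of try.txt are bdevperf's qpair traces: each 243:nvme_io_qpair_print_command line shows the READ or WRITE that was in flight, and the matching 474:spdk_nvme_print_completion line shows it completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the path-related NVMe status returned while a listener's ANA state makes that path inaccessible; these are the I/Os the active_active multipath policy has to retry on the other port. As a convenience for sizing the burst, a one-liner such as the following could count them in the captured file (assuming the try.txt path cat'ed above; grep -c counts matching lines rather than individual occurrences):

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt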
00:33:16.200 [2024-10-28 05:08:46.576778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.200 [2024-10-28 05:08:46.576818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.200 [2024-10-28 05:08:46.576859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.200 [2024-10-28 05:08:46.576900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.200 [2024-10-28 05:08:46.576962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:16.200 [2024-10-28 05:08:46.576985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.201 [2024-10-28 05:08:46.577367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.577964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.577980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 
dnr:0 00:33:16.201 [2024-10-28 05:08:46.578138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.201 [2024-10-28 05:08:46.578777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:16.201 [2024-10-28 05:08:46.578799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.578815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.578838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.578855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.578878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.578894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.578918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.578934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.578972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.578988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:16.202 [2024-10-28 05:08:46.579533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.579959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.579975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:74 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580423] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:16.202 [2024-10-28 05:08:46.580698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.202 [2024-10-28 05:08:46.580714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.580740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.580756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.580782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.580798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.580824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.580839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.580866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.580888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.580930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.580947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.580972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.580989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.203 [2024-10-28 05:08:46.581071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.203 [2024-10-28 05:08:46.581111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581745] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:08:46.581816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:08:46.581833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:16.203 7791.44 IOPS, 30.44 MiB/s [2024-10-28T04:09:06.799Z] 7333.12 IOPS, 28.64 MiB/s [2024-10-28T04:09:06.799Z] 6925.72 IOPS, 27.05 MiB/s [2024-10-28T04:09:06.799Z] 6561.21 IOPS, 25.63 MiB/s [2024-10-28T04:09:06.799Z] 6496.95 IOPS, 25.38 MiB/s [2024-10-28T04:09:06.799Z] 6547.52 IOPS, 25.58 MiB/s [2024-10-28T04:09:06.799Z] 6599.32 IOPS, 25.78 MiB/s [2024-10-28T04:09:06.799Z] 6720.48 IOPS, 26.25 MiB/s [2024-10-28T04:09:06.799Z] 6856.29 IOPS, 26.78 MiB/s [2024-10-28T04:09:06.799Z] 6984.64 IOPS, 27.28 MiB/s [2024-10-28T04:09:06.799Z] 7054.65 IOPS, 27.56 MiB/s [2024-10-28T04:09:06.799Z] 7078.33 IOPS, 27.65 MiB/s [2024-10-28T04:09:06.799Z] 7095.25 IOPS, 27.72 MiB/s [2024-10-28T04:09:06.799Z] 7111.00 IOPS, 27.78 MiB/s [2024-10-28T04:09:06.799Z] 7202.00 IOPS, 28.13 MiB/s [2024-10-28T04:09:06.799Z] 7293.68 IOPS, 28.49 MiB/s [2024-10-28T04:09:06.799Z] 7379.72 IOPS, 28.83 MiB/s [2024-10-28T04:09:06.799Z] [2024-10-28 05:09:03.591112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.203 [2024-10-28 05:09:03.591186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:09:03.591264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.203 [2024-10-28 05:09:03.591285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:09:03.591309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.203 [2024-10-28 05:09:03.591325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:09:03.591346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.203 [2024-10-28 05:09:03.591362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:09:03.591382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.203 [2024-10-28 05:09:03.591399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 
p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:09:03.591421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:09:03.591437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:09:03.591458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.203 [2024-10-28 05:09:03.591474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:09:03.591496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.203 [2024-10-28 05:09:03.591512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:09:03.591534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.203 [2024-10-28 05:09:03.591549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:09:03.591570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.203 [2024-10-28 05:09:03.591587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:09:03.591609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.203 [2024-10-28 05:09:03.591625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:09:03.591654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:09:03.591673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:16.203 [2024-10-28 05:09:03.591695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.203 [2024-10-28 05:09:03.591712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.591733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.204 [2024-10-28 05:09:03.591754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.591777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.204 [2024-10-28 05:09:03.591793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.591814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.204 [2024-10-28 05:09:03.591829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.591849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.591864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.591886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.591902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.591923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.591938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.591958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.591974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.591994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.592010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.592030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.592045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.592065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.592080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.592102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.592117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.592137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.592152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.592173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.592192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.592214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.592230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.592251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.592265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.592286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.592302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.592322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.592338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.592358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.592374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.592396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.592412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.593876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.204 [2024-10-28 05:09:03.593903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.593939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.204 [2024-10-28 05:09:03.593999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.594024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:16.204 [2024-10-28 05:09:03.594041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.594063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.594080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.594103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.594118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.594155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.594172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.594200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.594232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.594254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.594269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.594291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.204 [2024-10-28 05:09:03.594306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:16.204 [2024-10-28 05:09:03.594327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.594342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.594363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.594378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.594398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.594414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.594434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.205 [2024-10-28 05:09:03.594449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.594469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.594484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.594505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.594520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.594541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.594556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:33:16.205 [2024-10-28 05:09:03.595774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.595960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.595983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.596000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.596022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.596038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:16.205 [2024-10-28 05:09:03.596060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.205 [2024-10-28 05:09:03.596077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:16.205 7427.64 IOPS, 29.01 MiB/s [2024-10-28T04:09:06.801Z] 7450.97 IOPS, 29.11 MiB/s [2024-10-28T04:09:06.801Z] 7476.60 IOPS, 29.21 MiB/s [2024-10-28T04:09:06.801Z] Received shutdown signal, test time was about 35.112088 seconds 00:33:16.205 00:33:16.205 Latency(us) 00:33:16.205 [2024-10-28T04:09:06.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.205 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:16.205 Verification LBA range: start 0x0 length 0x4000 00:33:16.205 Nvme0n1 : 35.11 7475.46 29.20 0.00 0.00 17096.64 345.20 4036248.74 00:33:16.205 [2024-10-28T04:09:06.801Z] 
=================================================================================================================== 00:33:16.205 [2024-10-28T04:09:06.801Z] Total : 7475.46 29.20 0.00 0.00 17096.64 345.20 4036248.74 00:33:16.205 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:16.464 rmmod nvme_tcp 00:33:16.464 rmmod nvme_fabrics 00:33:16.464 rmmod nvme_keyring 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 2445966 ']' 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 2445966 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2445966 ']' 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2445966 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2445966 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2445966' 00:33:16.464 killing process with pid 2445966 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2445966 00:33:16.464 05:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2445966 00:33:16.722 05:09:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:16.722 05:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:16.722 05:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:16.722 05:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:16.722 05:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:33:16.722 05:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:16.722 05:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:33:16.723 05:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:16.723 05:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:16.723 05:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.723 05:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:16.723 05:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.625 05:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:18.625 00:33:18.625 real 0m45.701s 00:33:18.625 user 2m15.477s 00:33:18.625 sys 0m12.234s 00:33:18.625 05:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:18.625 05:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:18.625 ************************************ 00:33:18.625 END TEST nvmf_host_multipath_status 00:33:18.625 ************************************ 00:33:18.884 05:09:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:18.884 05:09:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.885 ************************************ 00:33:18.885 START TEST nvmf_discovery_remove_ifc 00:33:18.885 ************************************ 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:18.885 * Looking for test storage... 
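The teardown traced above, before the START TEST banner (multipath_status.sh lines 143-148 handing off to nvmftestfini in test/nvmf/common.sh), reduces to deleting the test subsystem over the SPDK RPC socket, removing the per-test scratch file, and unloading the initiator-side kernel modules; the remainder of the trace then restores the iptables rules and flushes the test interface addresses. A minimal sketch of that cleanup, reusing only the workspace path, subsystem NQN, and module names visible in this run's trace (they are specific to this job and would differ elsewhere):

    #!/usr/bin/env bash
    # Sketch of the cleanup steps traced above; the workspace path and
    # subsystem NQN below are the ones from this particular run.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Delete the subsystem the multipath test created on the target.
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Remove the per-test scratch file and unload the kernel NVMe/TCP
    # initiator modules, as nvmftestfini does in the trace that follows.
    rm -f "$SPDK_DIR/test/nvmf/host/try.txt"
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics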
00:33:18.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1689 -- # lcov --version 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:33:18.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.885 --rc genhtml_branch_coverage=1 00:33:18.885 --rc genhtml_function_coverage=1 00:33:18.885 --rc genhtml_legend=1 00:33:18.885 --rc geninfo_all_blocks=1 00:33:18.885 --rc geninfo_unexecuted_blocks=1 00:33:18.885 00:33:18.885 ' 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:33:18.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.885 --rc genhtml_branch_coverage=1 00:33:18.885 --rc genhtml_function_coverage=1 00:33:18.885 --rc genhtml_legend=1 00:33:18.885 --rc geninfo_all_blocks=1 00:33:18.885 --rc geninfo_unexecuted_blocks=1 00:33:18.885 00:33:18.885 ' 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:33:18.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.885 --rc genhtml_branch_coverage=1 00:33:18.885 --rc genhtml_function_coverage=1 00:33:18.885 --rc genhtml_legend=1 00:33:18.885 --rc geninfo_all_blocks=1 00:33:18.885 --rc geninfo_unexecuted_blocks=1 00:33:18.885 00:33:18.885 ' 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:33:18.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.885 --rc genhtml_branch_coverage=1 00:33:18.885 --rc genhtml_function_coverage=1 00:33:18.885 --rc genhtml_legend=1 00:33:18.885 --rc geninfo_all_blocks=1 00:33:18.885 --rc geninfo_unexecuted_blocks=1 00:33:18.885 00:33:18.885 ' 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:18.885 
05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.885 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:18.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:18.886 05:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:20.789 05:09:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:20.789 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.789 05:09:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:20.789 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:20.789 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:20.789 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:20.789 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:20.790 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.048 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:21.048 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:21.048 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:21.048 
05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:21.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:21.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:33:21.048 00:33:21.048 --- 10.0.0.2 ping statistics --- 00:33:21.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.049 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:21.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:21.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:33:21.049 00:33:21.049 --- 10.0.0.1 ping statistics --- 00:33:21.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.049 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=2453328 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 2453328 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2453328 ']' 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:21.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:21.049 05:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:21.049 [2024-10-28 05:09:11.490024] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:33:21.049 [2024-10-28 05:09:11.490097] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:21.049 [2024-10-28 05:09:11.627112] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:21.307 [2024-10-28 05:09:11.664956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.307 [2024-10-28 05:09:11.713794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:21.307 [2024-10-28 05:09:11.713850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:21.307 [2024-10-28 05:09:11.713866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:21.307 [2024-10-28 05:09:11.713889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:21.307 [2024-10-28 05:09:11.713899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:21.307 [2024-10-28 05:09:11.714491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.242 [2024-10-28 05:09:12.520086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:22.242 [2024-10-28 05:09:12.528276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:22.242 null0 00:33:22.242 [2024-10-28 05:09:12.560093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2453472 
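At this point the target-side half of the test is up: the cvl_0_0 port lives inside the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, and an nvmf_tgt bound to core mask 0x2 is listening for discovery on 10.0.0.2:8009 and for I/O on 10.0.0.2:4420, exposing a null0 bdev to host nqn.2021-12.io.spdk:test. A minimal sketch of how that configuration could be reproduced by hand follows; the rpc.py calls and their sizes/serials are assumptions (the script drives them through its rpc_cmd wrapper, so exact arguments may differ by SPDK version).

# Assumed reconstruction of the target-side bring-up reflected in the trace above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target runs inside the namespace
# once /var/tmp/spdk.sock answers (hypothetical rpc.py calls; the trace's transport opts are '-t tcp -o'):
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
./scripts/rpc.py bdev_null_create null0 100 512                              # illustrative size (MB) and block size
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420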
00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2453472 /tmp/host.sock 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2453472 ']' 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:22.242 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:22.242 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.242 [2024-10-28 05:09:12.626901] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:33:22.242 [2024-10-28 05:09:12.626979] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2453472 ] 00:33:22.242 [2024-10-28 05:09:12.758391] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
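The host side is a second nvmf_tgt instance (core mask 0x1) started outside the namespace with its own JSON-RPC socket at /tmp/host.sock, --wait-for-rpc so framework init is deferred until bdev_nvme options are set, and -L bdev_nvme to enable the debug log flag whose output dominates the rest of the trace. A rough equivalent, assuming the stock scripts/rpc.py helper in place of the test's waitforlisten wrapper:

./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
hostpid=$!
# Poll the private RPC socket until the app responds (waitforlisten does roughly this):
until ./scripts/rpc.py -s /tmp/host.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done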
00:33:22.242 [2024-10-28 05:09:12.798188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.502 [2024-10-28 05:09:12.848425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.502 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:22.502 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:22.502 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:22.502 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:22.502 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.502 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.502 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.502 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:22.502 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.502 05:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.502 05:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.502 05:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:22.502 05:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.502 05:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.904 [2024-10-28 05:09:14.065483] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:23.904 [2024-10-28 05:09:14.065522] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:23.904 [2024-10-28 05:09:14.065547] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:23.904 [2024-10-28 05:09:14.194663] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:23.904 [2024-10-28 05:09:14.252338] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:23.904 [2024-10-28 05:09:14.253375] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1668e00:1 started. 
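With both processes up, the host app is configured over /tmp/host.sock and a discovery connection is started against 10.0.0.2:8009 with deliberately aggressive timeouts (retry the connection every 1 s, fail pending I/O after 1 s, give up on the controller after 2 s) so the interface-removal phase later in the test completes quickly. The commands and flags below are the ones visible in the trace; only the use of rpc.py in place of the rpc_cmd wrapper is assumed.

./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1          # "-e 1" exactly as in the trace
./scripts/rpc.py -s /tmp/host.sock framework_start_init               # finish the startup deferred by --wait-for-rpc
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach                                                  # blocks until the discovered subsystem is attached (bdev nvme0n1)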
00:33:23.904 [2024-10-28 05:09:14.255290] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:23.904 [2024-10-28 05:09:14.255354] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:23.904 [2024-10-28 05:09:14.255396] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:23.904 [2024-10-28 05:09:14.255422] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:23.904 [2024-10-28 05:09:14.255455] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:23.904 [2024-10-28 05:09:14.262606] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1668e00 was disconnected and freed. delete nvme_qpair. 
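The repeated rpc_cmd/jq/sort/xargs blocks that follow are the test's get_bdev_list helper, and wait_for_bdev simply polls it once per second until the list matches the expected string (nvme0n1 here, the empty string after the interface is torn down, nvme1n1 once it comes back). A rough reconstruction of the two helpers, with rpc.py standing in for rpc_cmd:

get_bdev_list() {
    # All bdev names known to the host app, as one sorted, space-separated string
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll until the bdev list equals the expected value (e.g. "nvme0n1", "", "nvme1n1")
    local expected="$1"
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}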
00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:23.904 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:23.905 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:23.905 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:23.905 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.905 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:23.905 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.905 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:23.905 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:23.905 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.905 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:23.905 05:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:24.887 05:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:24.887 05:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:24.887 05:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:24.887 05:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.887 05:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:24.887 05:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.887 05:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:24.887 05:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.887 05:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:24.887 05:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:26.259 05:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:26.259 05:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:26.259 05:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.259 05:09:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:26.259 05:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:26.259 05:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:26.259 05:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:26.259 05:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.259 05:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:26.259 05:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:27.193 05:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:27.193 05:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:27.193 05:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.193 05:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:27.193 05:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:27.193 05:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:27.193 05:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:27.193 05:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.193 05:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:27.193 05:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:28.128 05:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:28.128 05:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:28.128 05:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:28.128 05:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.128 05:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:28.128 05:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:28.128 05:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:28.128 05:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.128 05:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:28.128 05:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:29.067 05:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:29.067 05:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:29.067 05:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:29.067 05:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.067 05:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:29.067 05:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:29.067 05:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:29.067 05:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.067 05:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:29.067 05:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:29.327 [2024-10-28 05:09:19.683522] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:29.327 [2024-10-28 05:09:19.683593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.327 [2024-10-28 05:09:19.683616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:29.327 [2024-10-28 05:09:19.683646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.327 [2024-10-28 05:09:19.683664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:29.327 [2024-10-28 05:09:19.683695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.327 [2024-10-28 05:09:19.683709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:29.327 [2024-10-28 05:09:19.683722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.327 [2024-10-28 05:09:19.683736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:29.327 [2024-10-28 05:09:19.683757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.327 [2024-10-28 05:09:19.683771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:29.327 [2024-10-28 05:09:19.683784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645680 is same with the state(6) to be set 00:33:29.327 [2024-10-28 05:09:19.693517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1645680 (9): Bad file descriptor 00:33:29.327 [2024-10-28 05:09:19.703536] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:29.327 [2024-10-28 05:09:19.703562] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
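The connection-timed-out, ABORTED - SQ DELETION, and "Bad file descriptor" messages above are the expected fallout of the test yanking the target interface out from under the established TCP connections; the two commands responsible appear earlier in the trace:

# Remove the target's address and take the link down inside the namespace
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
# The host-side bdev_nvme layer now enters its disconnect/reset/reconnect cycle, as logged here.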
00:33:29.327 [2024-10-28 05:09:19.703573] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:29.327 [2024-10-28 05:09:19.703583] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:29.327 [2024-10-28 05:09:19.703619] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:30.267 05:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:30.267 05:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:30.267 05:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:30.267 05:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.267 05:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.267 05:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:30.267 05:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:30.267 [2024-10-28 05:09:20.740695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:30.267 [2024-10-28 05:09:20.740768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1645680 with addr=10.0.0.2, port=4420 00:33:30.267 [2024-10-28 05:09:20.740798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645680 is same with the state(6) to be set 00:33:30.267 [2024-10-28 05:09:20.740849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1645680 (9): Bad file descriptor 00:33:30.267 [2024-10-28 05:09:20.741334] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:33:30.267 [2024-10-28 05:09:20.741383] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:30.267 [2024-10-28 05:09:20.741402] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:30.267 [2024-10-28 05:09:20.741420] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:30.267 [2024-10-28 05:09:20.741435] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:30.267 [2024-10-28 05:09:20.741447] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:30.267 [2024-10-28 05:09:20.741474] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:30.267 [2024-10-28 05:09:20.741492] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
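With the target address gone, each reconnect attempt fails with errno 110 (connection timed out); once ctrlr-loss-timeout-sec (2 s) expires the controller is deleted, the discovery entry is removed, and nvme0n1 drops out of the bdev list. The test tracks this with wait_for_bdev '', which amounts to roughly:

# Wait for the attached namespace bdev to be torn down (wait_for_bdev '' in the script)
while [[ -n "$(get_bdev_list)" ]]; do
    sleep 1
done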
00:33:30.267 [2024-10-28 05:09:20.741503] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:30.267 05:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.267 05:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:30.267 05:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:31.205 [2024-10-28 05:09:21.741582] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:31.205 [2024-10-28 05:09:21.741619] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:31.205 [2024-10-28 05:09:21.741651] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:31.205 [2024-10-28 05:09:21.741669] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:31.205 [2024-10-28 05:09:21.741698] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:31.205 [2024-10-28 05:09:21.741711] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:31.205 [2024-10-28 05:09:21.741720] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:31.205 [2024-10-28 05:09:21.741739] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:31.205 [2024-10-28 05:09:21.741774] bdev_nvme.c:7042:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:31.205 [2024-10-28 05:09:21.741813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:31.205 [2024-10-28 05:09:21.741833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.205 [2024-10-28 05:09:21.741852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:31.205 [2024-10-28 05:09:21.741865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.206 [2024-10-28 05:09:21.741878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:31.206 [2024-10-28 05:09:21.741890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.206 [2024-10-28 05:09:21.741903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:31.206 [2024-10-28 05:09:21.741933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.206 [2024-10-28 05:09:21.741949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:31.206 [2024-10-28 05:09:21.741964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.206 [2024-10-28 05:09:21.741978] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:31.206 [2024-10-28 05:09:21.742077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1634dc0 (9): Bad file descriptor 00:33:31.206 [2024-10-28 05:09:21.743102] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:31.206 [2024-10-28 05:09:21.743126] nvme_ctrlr.c:1190:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:31.206 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:31.206 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:31.206 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:31.206 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.206 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.206 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:31.206 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:31.206 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:31.465 05:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:32.406 05:09:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:32.406 05:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:32.406 05:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:32.406 05:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.406 05:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.406 05:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:32.406 05:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:32.406 05:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.406 05:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:32.406 05:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:33.346 [2024-10-28 05:09:23.797861] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:33.346 [2024-10-28 05:09:23.797897] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:33.346 [2024-10-28 05:09:23.797932] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:33.346 [2024-10-28 05:09:23.885009] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:33.346 05:09:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:33.346 05:09:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.606 05:09:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:33.606 05:09:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.606 05:09:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:33.606 05:09:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.606 05:09:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:33.606 05:09:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.606 05:09:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:33.606 05:09:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:33.606 [2024-10-28 05:09:23.985612] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:33.606 [2024-10-28 05:09:23.986421] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x16499a0:1 started. 
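Recovery is the mirror image: the address is re-added and the link brought back up inside the namespace, the still-running discovery service reconnects to 10.0.0.2:8009, and the rediscovered subsystem is attached as a new controller (nvme1), so the test now waits for nvme1n1 instead of nvme0n1:

# Restore the target interface (as traced above) and wait for the re-attached bdev
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1    # the new controller is nvme1, hence bdev nvme1n1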
00:33:33.606 [2024-10-28 05:09:23.987874] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:33.606 [2024-10-28 05:09:23.987930] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:33.606 [2024-10-28 05:09:23.987961] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:33.606 [2024-10-28 05:09:23.987995] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:33.606 [2024-10-28 05:09:23.988011] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:33.606 [2024-10-28 05:09:23.994611] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x16499a0 was disconnected and freed. delete nvme_qpair. 00:33:34.542 05:09:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.542 05:09:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.542 05:09:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.542 05:09:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.542 05:09:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.542 05:09:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.542 05:09:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.542 05:09:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2453472 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2453472 ']' 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2453472 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2453472 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2453472' 00:33:34.542 killing process with pid 2453472 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2453472 00:33:34.542 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2453472 00:33:34.800 05:09:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:34.800 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:34.800 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:34.800 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:34.800 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:34.800 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:34.800 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:34.800 rmmod nvme_tcp 00:33:34.800 rmmod nvme_fabrics 00:33:34.800 rmmod nvme_keyring 00:33:34.800 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:34.800 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:34.800 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:34.800 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 2453328 ']' 00:33:34.800 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 2453328 00:33:34.801 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2453328 ']' 00:33:34.801 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2453328 00:33:34.801 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:34.801 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:34.801 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2453328 00:33:34.801 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:34.801 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:34.801 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2453328' 00:33:34.801 killing process with pid 2453328 00:33:34.801 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2453328 00:33:34.801 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2453328 00:33:35.061 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:35.061 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:35.061 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:35.061 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:35.061 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:33:35.061 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:35.061 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:33:35.061 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:35.061 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:35.061 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.061 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.061 05:09:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.972 05:09:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:36.972 00:33:36.972 real 0m18.310s 00:33:36.972 user 0m26.433s 00:33:36.972 sys 0m2.971s 00:33:36.972 05:09:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:36.972 05:09:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.972 ************************************ 00:33:36.972 END TEST nvmf_discovery_remove_ifc 00:33:36.972 ************************************ 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.232 ************************************ 00:33:37.232 START TEST nvmf_identify_kernel_target 00:33:37.232 ************************************ 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:37.232 * Looking for test storage... 
00:33:37.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1689 -- # lcov --version 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:33:37.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.232 --rc genhtml_branch_coverage=1 00:33:37.232 --rc genhtml_function_coverage=1 00:33:37.232 --rc genhtml_legend=1 00:33:37.232 --rc geninfo_all_blocks=1 00:33:37.232 --rc geninfo_unexecuted_blocks=1 00:33:37.232 00:33:37.232 ' 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:33:37.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.232 --rc genhtml_branch_coverage=1 00:33:37.232 --rc genhtml_function_coverage=1 00:33:37.232 --rc genhtml_legend=1 00:33:37.232 --rc geninfo_all_blocks=1 00:33:37.232 --rc geninfo_unexecuted_blocks=1 00:33:37.232 00:33:37.232 ' 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:33:37.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.232 --rc genhtml_branch_coverage=1 00:33:37.232 --rc genhtml_function_coverage=1 00:33:37.232 --rc genhtml_legend=1 00:33:37.232 --rc geninfo_all_blocks=1 00:33:37.232 --rc geninfo_unexecuted_blocks=1 00:33:37.232 00:33:37.232 ' 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:33:37.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.232 --rc genhtml_branch_coverage=1 00:33:37.232 --rc genhtml_function_coverage=1 00:33:37.232 --rc genhtml_legend=1 00:33:37.232 --rc geninfo_all_blocks=1 00:33:37.232 --rc geninfo_unexecuted_blocks=1 00:33:37.232 00:33:37.232 ' 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.232 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:37.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:37.233 05:09:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:39.150 05:09:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:39.150 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:39.150 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:39.151 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:39.151 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:39.151 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:39.151 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:39.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:33:39.418 00:33:39.418 --- 10.0.0.2 ping statistics --- 00:33:39.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.418 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:39.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:33:39.418 00:33:39.418 --- 10.0.0.1 ping statistics --- 00:33:39.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.418 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:39.418 05:09:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:39.418 05:09:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:40.355 Waiting for block devices as requested 00:33:40.614 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:40.614 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:40.875 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:40.875 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:40.875 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:40.875 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:41.134 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:41.134 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:41.134 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:41.134 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:41.393 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:41.393 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:41.393 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:41.654 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:41.654 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:41.654 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:41.654 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1649 -- # [[ none != none ]] 
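For reference, the loopback rig that nvmf_tcp_init assembles in the trace above can be reproduced by hand. This is a minimal sketch using only names and addresses taken from the log (namespace cvl_0_0_ns_spdk, interfaces cvl_0_0/cvl_0_1, 10.0.0.1 and 10.0.0.2); it assumes the two ice-driven ports are cabled back-to-back as on this rig and is not a substitute for the SPDK helper itself. The port the helper labels NVMF_TARGET_INTERFACE (cvl_0_0) is moved into the namespace; cvl_0_1 stays in the root namespace as the initiator-facing interface.

    # Park cvl_0_0 in its own network namespace, keep cvl_0_1 in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic on port 4420 and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The SPDK_NVMF comment tag on the firewall rule is what the later cleanup keys on (iptables-save | grep -v SPDK_NVMF | iptables-restore), so the rule is dropped automatically when the test finishes.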
00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:41.913 No valid GPT data, bailing 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:41.913 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:42.173 00:33:42.173 Discovery Log Number of Records 2, Generation counter 2 00:33:42.173 =====Discovery Log Entry 0====== 00:33:42.173 trtype: tcp 00:33:42.173 adrfam: ipv4 00:33:42.173 subtype: current discovery subsystem 00:33:42.173 treq: not specified, sq flow control disable supported 00:33:42.173 portid: 1 00:33:42.173 trsvcid: 4420 00:33:42.173 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:42.173 traddr: 10.0.0.1 00:33:42.173 eflags: none 00:33:42.173 sectype: none 00:33:42.173 =====Discovery Log Entry 1====== 00:33:42.173 trtype: tcp 00:33:42.173 adrfam: ipv4 00:33:42.173 subtype: nvme subsystem 00:33:42.173 treq: not specified, sq flow control disable 
supported 00:33:42.173 portid: 1 00:33:42.173 trsvcid: 4420 00:33:42.173 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:42.173 traddr: 10.0.0.1 00:33:42.173 eflags: none 00:33:42.173 sectype: none 00:33:42.173 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:42.173 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:42.173 ===================================================== 00:33:42.173 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:42.173 ===================================================== 00:33:42.173 Controller Capabilities/Features 00:33:42.173 ================================ 00:33:42.173 Vendor ID: 0000 00:33:42.173 Subsystem Vendor ID: 0000 00:33:42.173 Serial Number: c3a8d454620b611a771a 00:33:42.173 Model Number: Linux 00:33:42.173 Firmware Version: 6.8.9-20 00:33:42.173 Recommended Arb Burst: 0 00:33:42.173 IEEE OUI Identifier: 00 00 00 00:33:42.173 Multi-path I/O 00:33:42.173 May have multiple subsystem ports: No 00:33:42.173 May have multiple controllers: No 00:33:42.173 Associated with SR-IOV VF: No 00:33:42.173 Max Data Transfer Size: Unlimited 00:33:42.173 Max Number of Namespaces: 0 00:33:42.173 Max Number of I/O Queues: 1024 00:33:42.173 NVMe Specification Version (VS): 1.3 00:33:42.173 NVMe Specification Version (Identify): 1.3 00:33:42.173 Maximum Queue Entries: 1024 00:33:42.173 Contiguous Queues Required: No 00:33:42.173 Arbitration Mechanisms Supported 00:33:42.173 Weighted Round Robin: Not Supported 00:33:42.173 Vendor Specific: Not Supported 00:33:42.173 Reset Timeout: 7500 ms 00:33:42.173 Doorbell Stride: 4 bytes 00:33:42.173 NVM Subsystem Reset: Not Supported 00:33:42.173 Command Sets Supported 00:33:42.173 NVM Command Set: Supported 00:33:42.173 Boot Partition: Not Supported 00:33:42.173 Memory Page Size Minimum: 4096 bytes 00:33:42.173 Memory Page Size Maximum: 4096 bytes 00:33:42.173 Persistent Memory Region: Not Supported 00:33:42.173 Optional Asynchronous Events Supported 00:33:42.173 Namespace Attribute Notices: Not Supported 00:33:42.173 Firmware Activation Notices: Not Supported 00:33:42.173 ANA Change Notices: Not Supported 00:33:42.173 PLE Aggregate Log Change Notices: Not Supported 00:33:42.173 LBA Status Info Alert Notices: Not Supported 00:33:42.173 EGE Aggregate Log Change Notices: Not Supported 00:33:42.173 Normal NVM Subsystem Shutdown event: Not Supported 00:33:42.173 Zone Descriptor Change Notices: Not Supported 00:33:42.173 Discovery Log Change Notices: Supported 00:33:42.173 Controller Attributes 00:33:42.173 128-bit Host Identifier: Not Supported 00:33:42.173 Non-Operational Permissive Mode: Not Supported 00:33:42.173 NVM Sets: Not Supported 00:33:42.173 Read Recovery Levels: Not Supported 00:33:42.173 Endurance Groups: Not Supported 00:33:42.173 Predictable Latency Mode: Not Supported 00:33:42.173 Traffic Based Keep ALive: Not Supported 00:33:42.173 Namespace Granularity: Not Supported 00:33:42.173 SQ Associations: Not Supported 00:33:42.173 UUID List: Not Supported 00:33:42.173 Multi-Domain Subsystem: Not Supported 00:33:42.173 Fixed Capacity Management: Not Supported 00:33:42.173 Variable Capacity Management: Not Supported 00:33:42.173 Delete Endurance Group: Not Supported 00:33:42.173 Delete NVM Set: Not Supported 00:33:42.173 Extended LBA Formats Supported: Not Supported 00:33:42.173 Flexible Data Placement 
Supported: Not Supported 00:33:42.173 00:33:42.173 Controller Memory Buffer Support 00:33:42.173 ================================ 00:33:42.173 Supported: No 00:33:42.173 00:33:42.173 Persistent Memory Region Support 00:33:42.173 ================================ 00:33:42.173 Supported: No 00:33:42.173 00:33:42.173 Admin Command Set Attributes 00:33:42.173 ============================ 00:33:42.173 Security Send/Receive: Not Supported 00:33:42.173 Format NVM: Not Supported 00:33:42.173 Firmware Activate/Download: Not Supported 00:33:42.173 Namespace Management: Not Supported 00:33:42.173 Device Self-Test: Not Supported 00:33:42.173 Directives: Not Supported 00:33:42.173 NVMe-MI: Not Supported 00:33:42.173 Virtualization Management: Not Supported 00:33:42.173 Doorbell Buffer Config: Not Supported 00:33:42.173 Get LBA Status Capability: Not Supported 00:33:42.173 Command & Feature Lockdown Capability: Not Supported 00:33:42.173 Abort Command Limit: 1 00:33:42.173 Async Event Request Limit: 1 00:33:42.173 Number of Firmware Slots: N/A 00:33:42.173 Firmware Slot 1 Read-Only: N/A 00:33:42.173 Firmware Activation Without Reset: N/A 00:33:42.173 Multiple Update Detection Support: N/A 00:33:42.173 Firmware Update Granularity: No Information Provided 00:33:42.173 Per-Namespace SMART Log: No 00:33:42.173 Asymmetric Namespace Access Log Page: Not Supported 00:33:42.173 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:42.173 Command Effects Log Page: Not Supported 00:33:42.173 Get Log Page Extended Data: Supported 00:33:42.173 Telemetry Log Pages: Not Supported 00:33:42.173 Persistent Event Log Pages: Not Supported 00:33:42.173 Supported Log Pages Log Page: May Support 00:33:42.173 Commands Supported & Effects Log Page: Not Supported 00:33:42.173 Feature Identifiers & Effects Log Page:May Support 00:33:42.173 NVMe-MI Commands & Effects Log Page: May Support 00:33:42.173 Data Area 4 for Telemetry Log: Not Supported 00:33:42.173 Error Log Page Entries Supported: 1 00:33:42.174 Keep Alive: Not Supported 00:33:42.174 00:33:42.174 NVM Command Set Attributes 00:33:42.174 ========================== 00:33:42.174 Submission Queue Entry Size 00:33:42.174 Max: 1 00:33:42.174 Min: 1 00:33:42.174 Completion Queue Entry Size 00:33:42.174 Max: 1 00:33:42.174 Min: 1 00:33:42.174 Number of Namespaces: 0 00:33:42.174 Compare Command: Not Supported 00:33:42.174 Write Uncorrectable Command: Not Supported 00:33:42.174 Dataset Management Command: Not Supported 00:33:42.174 Write Zeroes Command: Not Supported 00:33:42.174 Set Features Save Field: Not Supported 00:33:42.174 Reservations: Not Supported 00:33:42.174 Timestamp: Not Supported 00:33:42.174 Copy: Not Supported 00:33:42.174 Volatile Write Cache: Not Present 00:33:42.174 Atomic Write Unit (Normal): 1 00:33:42.174 Atomic Write Unit (PFail): 1 00:33:42.174 Atomic Compare & Write Unit: 1 00:33:42.174 Fused Compare & Write: Not Supported 00:33:42.174 Scatter-Gather List 00:33:42.174 SGL Command Set: Supported 00:33:42.174 SGL Keyed: Not Supported 00:33:42.174 SGL Bit Bucket Descriptor: Not Supported 00:33:42.174 SGL Metadata Pointer: Not Supported 00:33:42.174 Oversized SGL: Not Supported 00:33:42.174 SGL Metadata Address: Not Supported 00:33:42.174 SGL Offset: Supported 00:33:42.174 Transport SGL Data Block: Not Supported 00:33:42.174 Replay Protected Memory Block: Not Supported 00:33:42.174 00:33:42.174 Firmware Slot Information 00:33:42.174 ========================= 00:33:42.174 Active slot: 0 00:33:42.174 00:33:42.174 00:33:42.174 Error Log 00:33:42.174 
========= 00:33:42.174 00:33:42.174 Active Namespaces 00:33:42.174 ================= 00:33:42.174 Discovery Log Page 00:33:42.174 ================== 00:33:42.174 Generation Counter: 2 00:33:42.174 Number of Records: 2 00:33:42.174 Record Format: 0 00:33:42.174 00:33:42.174 Discovery Log Entry 0 00:33:42.174 ---------------------- 00:33:42.174 Transport Type: 3 (TCP) 00:33:42.174 Address Family: 1 (IPv4) 00:33:42.174 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:42.174 Entry Flags: 00:33:42.174 Duplicate Returned Information: 0 00:33:42.174 Explicit Persistent Connection Support for Discovery: 0 00:33:42.174 Transport Requirements: 00:33:42.174 Secure Channel: Not Specified 00:33:42.174 Port ID: 1 (0x0001) 00:33:42.174 Controller ID: 65535 (0xffff) 00:33:42.174 Admin Max SQ Size: 32 00:33:42.174 Transport Service Identifier: 4420 00:33:42.174 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:42.174 Transport Address: 10.0.0.1 00:33:42.174 Discovery Log Entry 1 00:33:42.174 ---------------------- 00:33:42.174 Transport Type: 3 (TCP) 00:33:42.174 Address Family: 1 (IPv4) 00:33:42.174 Subsystem Type: 2 (NVM Subsystem) 00:33:42.174 Entry Flags: 00:33:42.174 Duplicate Returned Information: 0 00:33:42.174 Explicit Persistent Connection Support for Discovery: 0 00:33:42.174 Transport Requirements: 00:33:42.174 Secure Channel: Not Specified 00:33:42.174 Port ID: 1 (0x0001) 00:33:42.174 Controller ID: 65535 (0xffff) 00:33:42.174 Admin Max SQ Size: 32 00:33:42.174 Transport Service Identifier: 4420 00:33:42.174 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:42.174 Transport Address: 10.0.0.1 00:33:42.174 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:42.434 get_feature(0x01) failed 00:33:42.434 get_feature(0x02) failed 00:33:42.434 get_feature(0x04) failed 00:33:42.434 ===================================================== 00:33:42.434 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:42.434 ===================================================== 00:33:42.434 Controller Capabilities/Features 00:33:42.434 ================================ 00:33:42.434 Vendor ID: 0000 00:33:42.434 Subsystem Vendor ID: 0000 00:33:42.434 Serial Number: 1777e8e4bd8a2073de9a 00:33:42.434 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:42.434 Firmware Version: 6.8.9-20 00:33:42.434 Recommended Arb Burst: 6 00:33:42.434 IEEE OUI Identifier: 00 00 00 00:33:42.434 Multi-path I/O 00:33:42.434 May have multiple subsystem ports: Yes 00:33:42.434 May have multiple controllers: Yes 00:33:42.434 Associated with SR-IOV VF: No 00:33:42.434 Max Data Transfer Size: Unlimited 00:33:42.434 Max Number of Namespaces: 1024 00:33:42.434 Max Number of I/O Queues: 128 00:33:42.434 NVMe Specification Version (VS): 1.3 00:33:42.434 NVMe Specification Version (Identify): 1.3 00:33:42.434 Maximum Queue Entries: 1024 00:33:42.434 Contiguous Queues Required: No 00:33:42.434 Arbitration Mechanisms Supported 00:33:42.434 Weighted Round Robin: Not Supported 00:33:42.434 Vendor Specific: Not Supported 00:33:42.434 Reset Timeout: 7500 ms 00:33:42.434 Doorbell Stride: 4 bytes 00:33:42.434 NVM Subsystem Reset: Not Supported 00:33:42.434 Command Sets Supported 00:33:42.434 NVM Command Set: Supported 00:33:42.434 Boot Partition: Not Supported 00:33:42.434 
Memory Page Size Minimum: 4096 bytes 00:33:42.434 Memory Page Size Maximum: 4096 bytes 00:33:42.434 Persistent Memory Region: Not Supported 00:33:42.434 Optional Asynchronous Events Supported 00:33:42.434 Namespace Attribute Notices: Supported 00:33:42.434 Firmware Activation Notices: Not Supported 00:33:42.434 ANA Change Notices: Supported 00:33:42.434 PLE Aggregate Log Change Notices: Not Supported 00:33:42.434 LBA Status Info Alert Notices: Not Supported 00:33:42.434 EGE Aggregate Log Change Notices: Not Supported 00:33:42.434 Normal NVM Subsystem Shutdown event: Not Supported 00:33:42.434 Zone Descriptor Change Notices: Not Supported 00:33:42.434 Discovery Log Change Notices: Not Supported 00:33:42.434 Controller Attributes 00:33:42.434 128-bit Host Identifier: Supported 00:33:42.434 Non-Operational Permissive Mode: Not Supported 00:33:42.434 NVM Sets: Not Supported 00:33:42.434 Read Recovery Levels: Not Supported 00:33:42.434 Endurance Groups: Not Supported 00:33:42.434 Predictable Latency Mode: Not Supported 00:33:42.434 Traffic Based Keep ALive: Supported 00:33:42.434 Namespace Granularity: Not Supported 00:33:42.434 SQ Associations: Not Supported 00:33:42.434 UUID List: Not Supported 00:33:42.434 Multi-Domain Subsystem: Not Supported 00:33:42.434 Fixed Capacity Management: Not Supported 00:33:42.434 Variable Capacity Management: Not Supported 00:33:42.434 Delete Endurance Group: Not Supported 00:33:42.434 Delete NVM Set: Not Supported 00:33:42.434 Extended LBA Formats Supported: Not Supported 00:33:42.434 Flexible Data Placement Supported: Not Supported 00:33:42.434 00:33:42.434 Controller Memory Buffer Support 00:33:42.434 ================================ 00:33:42.434 Supported: No 00:33:42.434 00:33:42.434 Persistent Memory Region Support 00:33:42.434 ================================ 00:33:42.434 Supported: No 00:33:42.434 00:33:42.434 Admin Command Set Attributes 00:33:42.434 ============================ 00:33:42.434 Security Send/Receive: Not Supported 00:33:42.434 Format NVM: Not Supported 00:33:42.434 Firmware Activate/Download: Not Supported 00:33:42.434 Namespace Management: Not Supported 00:33:42.434 Device Self-Test: Not Supported 00:33:42.434 Directives: Not Supported 00:33:42.434 NVMe-MI: Not Supported 00:33:42.434 Virtualization Management: Not Supported 00:33:42.434 Doorbell Buffer Config: Not Supported 00:33:42.434 Get LBA Status Capability: Not Supported 00:33:42.434 Command & Feature Lockdown Capability: Not Supported 00:33:42.434 Abort Command Limit: 4 00:33:42.434 Async Event Request Limit: 4 00:33:42.434 Number of Firmware Slots: N/A 00:33:42.434 Firmware Slot 1 Read-Only: N/A 00:33:42.434 Firmware Activation Without Reset: N/A 00:33:42.434 Multiple Update Detection Support: N/A 00:33:42.434 Firmware Update Granularity: No Information Provided 00:33:42.434 Per-Namespace SMART Log: Yes 00:33:42.434 Asymmetric Namespace Access Log Page: Supported 00:33:42.434 ANA Transition Time : 10 sec 00:33:42.434 00:33:42.435 Asymmetric Namespace Access Capabilities 00:33:42.435 ANA Optimized State : Supported 00:33:42.435 ANA Non-Optimized State : Supported 00:33:42.435 ANA Inaccessible State : Supported 00:33:42.435 ANA Persistent Loss State : Supported 00:33:42.435 ANA Change State : Supported 00:33:42.435 ANAGRPID is not changed : No 00:33:42.435 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:42.435 00:33:42.435 ANA Group Identifier Maximum : 128 00:33:42.435 Number of ANA Group Identifiers : 128 00:33:42.435 Max Number of Allowed Namespaces : 1024 00:33:42.435 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:42.435 Command Effects Log Page: Supported 00:33:42.435 Get Log Page Extended Data: Supported 00:33:42.435 Telemetry Log Pages: Not Supported 00:33:42.435 Persistent Event Log Pages: Not Supported 00:33:42.435 Supported Log Pages Log Page: May Support 00:33:42.435 Commands Supported & Effects Log Page: Not Supported 00:33:42.435 Feature Identifiers & Effects Log Page:May Support 00:33:42.435 NVMe-MI Commands & Effects Log Page: May Support 00:33:42.435 Data Area 4 for Telemetry Log: Not Supported 00:33:42.435 Error Log Page Entries Supported: 128 00:33:42.435 Keep Alive: Supported 00:33:42.435 Keep Alive Granularity: 1000 ms 00:33:42.435 00:33:42.435 NVM Command Set Attributes 00:33:42.435 ========================== 00:33:42.435 Submission Queue Entry Size 00:33:42.435 Max: 64 00:33:42.435 Min: 64 00:33:42.435 Completion Queue Entry Size 00:33:42.435 Max: 16 00:33:42.435 Min: 16 00:33:42.435 Number of Namespaces: 1024 00:33:42.435 Compare Command: Not Supported 00:33:42.435 Write Uncorrectable Command: Not Supported 00:33:42.435 Dataset Management Command: Supported 00:33:42.435 Write Zeroes Command: Supported 00:33:42.435 Set Features Save Field: Not Supported 00:33:42.435 Reservations: Not Supported 00:33:42.435 Timestamp: Not Supported 00:33:42.435 Copy: Not Supported 00:33:42.435 Volatile Write Cache: Present 00:33:42.435 Atomic Write Unit (Normal): 1 00:33:42.435 Atomic Write Unit (PFail): 1 00:33:42.435 Atomic Compare & Write Unit: 1 00:33:42.435 Fused Compare & Write: Not Supported 00:33:42.435 Scatter-Gather List 00:33:42.435 SGL Command Set: Supported 00:33:42.435 SGL Keyed: Not Supported 00:33:42.435 SGL Bit Bucket Descriptor: Not Supported 00:33:42.435 SGL Metadata Pointer: Not Supported 00:33:42.435 Oversized SGL: Not Supported 00:33:42.435 SGL Metadata Address: Not Supported 00:33:42.435 SGL Offset: Supported 00:33:42.435 Transport SGL Data Block: Not Supported 00:33:42.435 Replay Protected Memory Block: Not Supported 00:33:42.435 00:33:42.435 Firmware Slot Information 00:33:42.435 ========================= 00:33:42.435 Active slot: 0 00:33:42.435 00:33:42.435 Asymmetric Namespace Access 00:33:42.435 =========================== 00:33:42.435 Change Count : 0 00:33:42.435 Number of ANA Group Descriptors : 1 00:33:42.435 ANA Group Descriptor : 0 00:33:42.435 ANA Group ID : 1 00:33:42.435 Number of NSID Values : 1 00:33:42.435 Change Count : 0 00:33:42.435 ANA State : 1 00:33:42.435 Namespace Identifier : 1 00:33:42.435 00:33:42.435 Commands Supported and Effects 00:33:42.435 ============================== 00:33:42.435 Admin Commands 00:33:42.435 -------------- 00:33:42.435 Get Log Page (02h): Supported 00:33:42.435 Identify (06h): Supported 00:33:42.435 Abort (08h): Supported 00:33:42.435 Set Features (09h): Supported 00:33:42.435 Get Features (0Ah): Supported 00:33:42.435 Asynchronous Event Request (0Ch): Supported 00:33:42.435 Keep Alive (18h): Supported 00:33:42.435 I/O Commands 00:33:42.435 ------------ 00:33:42.435 Flush (00h): Supported 00:33:42.435 Write (01h): Supported LBA-Change 00:33:42.435 Read (02h): Supported 00:33:42.435 Write Zeroes (08h): Supported LBA-Change 00:33:42.435 Dataset Management (09h): Supported 00:33:42.435 00:33:42.435 Error Log 00:33:42.435 ========= 00:33:42.435 Entry: 0 00:33:42.435 Error Count: 0x3 00:33:42.435 Submission Queue Id: 0x0 00:33:42.435 Command Id: 0x5 00:33:42.435 Phase Bit: 0 00:33:42.435 Status Code: 0x2 00:33:42.435 Status Code Type: 0x0 00:33:42.435 Do Not Retry: 1 00:33:42.435 
Error Location: 0x28 00:33:42.435 LBA: 0x0 00:33:42.435 Namespace: 0x0 00:33:42.435 Vendor Log Page: 0x0 00:33:42.435 ----------- 00:33:42.435 Entry: 1 00:33:42.435 Error Count: 0x2 00:33:42.435 Submission Queue Id: 0x0 00:33:42.435 Command Id: 0x5 00:33:42.435 Phase Bit: 0 00:33:42.435 Status Code: 0x2 00:33:42.435 Status Code Type: 0x0 00:33:42.435 Do Not Retry: 1 00:33:42.435 Error Location: 0x28 00:33:42.435 LBA: 0x0 00:33:42.435 Namespace: 0x0 00:33:42.435 Vendor Log Page: 0x0 00:33:42.435 ----------- 00:33:42.435 Entry: 2 00:33:42.435 Error Count: 0x1 00:33:42.435 Submission Queue Id: 0x0 00:33:42.435 Command Id: 0x4 00:33:42.435 Phase Bit: 0 00:33:42.435 Status Code: 0x2 00:33:42.435 Status Code Type: 0x0 00:33:42.435 Do Not Retry: 1 00:33:42.435 Error Location: 0x28 00:33:42.435 LBA: 0x0 00:33:42.435 Namespace: 0x0 00:33:42.435 Vendor Log Page: 0x0 00:33:42.435 00:33:42.435 Number of Queues 00:33:42.435 ================ 00:33:42.435 Number of I/O Submission Queues: 128 00:33:42.435 Number of I/O Completion Queues: 128 00:33:42.435 00:33:42.435 ZNS Specific Controller Data 00:33:42.435 ============================ 00:33:42.435 Zone Append Size Limit: 0 00:33:42.435 00:33:42.435 00:33:42.435 Active Namespaces 00:33:42.435 ================= 00:33:42.435 get_feature(0x05) failed 00:33:42.435 Namespace ID:1 00:33:42.435 Command Set Identifier: NVM (00h) 00:33:42.435 Deallocate: Supported 00:33:42.435 Deallocated/Unwritten Error: Not Supported 00:33:42.435 Deallocated Read Value: Unknown 00:33:42.435 Deallocate in Write Zeroes: Not Supported 00:33:42.435 Deallocated Guard Field: 0xFFFF 00:33:42.435 Flush: Supported 00:33:42.435 Reservation: Not Supported 00:33:42.435 Namespace Sharing Capabilities: Multiple Controllers 00:33:42.435 Size (in LBAs): 1953525168 (931GiB) 00:33:42.435 Capacity (in LBAs): 1953525168 (931GiB) 00:33:42.435 Utilization (in LBAs): 1953525168 (931GiB) 00:33:42.435 UUID: 0f4b1b76-c9e1-42c3-9452-fa7763863a80 00:33:42.435 Thin Provisioning: Not Supported 00:33:42.435 Per-NS Atomic Units: Yes 00:33:42.435 Atomic Boundary Size (Normal): 0 00:33:42.435 Atomic Boundary Size (PFail): 0 00:33:42.435 Atomic Boundary Offset: 0 00:33:42.435 NGUID/EUI64 Never Reused: No 00:33:42.435 ANA group ID: 1 00:33:42.435 Namespace Write Protected: No 00:33:42.435 Number of LBA Formats: 1 00:33:42.435 Current LBA Format: LBA Format #00 00:33:42.435 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:42.435 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:42.435 rmmod nvme_tcp 00:33:42.435 rmmod nvme_fabrics 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:42.435 05:09:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:42.435 05:09:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.964 05:09:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:44.964 05:09:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:44.964 05:09:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:44.964 05:09:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:33:44.964 05:09:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:44.964 05:09:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:44.964 05:09:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:44.964 05:09:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:44.964 05:09:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:33:44.964 05:09:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:33:44.964 05:09:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:45.898 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:45.898 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:45.898 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:45.898 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:45.898 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:45.898 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:33:45.898 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:45.898 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:45.898 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:45.898 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:45.898 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:45.898 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:45.898 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:45.898 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:45.898 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:45.898 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:46.832 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:47.092 00:33:47.092 real 0m9.853s 00:33:47.092 user 0m2.135s 00:33:47.092 sys 0m3.501s 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.092 ************************************ 00:33:47.092 END TEST nvmf_identify_kernel_target 00:33:47.092 ************************************ 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.092 ************************************ 00:33:47.092 START TEST nvmf_auth_host 00:33:47.092 ************************************ 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:47.092 * Looking for test storage... 
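The configure_kernel_target / clean_kernel_target pair traced above drives the kernel nvmet configfs tree directly; because bash xtrace does not print redirections, the files those echo commands write into are not visible in the log. The sketch below spells out the same bring-up and teardown using the standard nvmet configfs attribute names; the attribute paths are inferred from that convention and from the identify output (Model Number SPDK-nqn.2016-06.io.spdk:testnqn), not taken verbatim from the log.

    # Bring-up: export /dev/nvme0n1 as nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420 over TCP
    modprobe nvmet
    modprobe nvmet_tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # publishing the subsystem on the port makes the target live
    # Teardown, mirroring clean_kernel_target
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet

Once the symlink is in place the target answers on 10.0.0.1:4420, which is why the nvme discover and spdk_nvme_identify runs above report two discovery-log records (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn) and a namespace of 1953525168 LBAs (931 GiB) backed by the local drive.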
00:33:47.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1689 -- # lcov --version 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:33:47.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.092 --rc genhtml_branch_coverage=1 00:33:47.092 --rc genhtml_function_coverage=1 00:33:47.092 --rc genhtml_legend=1 00:33:47.092 --rc geninfo_all_blocks=1 00:33:47.092 --rc geninfo_unexecuted_blocks=1 00:33:47.092 00:33:47.092 ' 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:33:47.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.092 --rc genhtml_branch_coverage=1 00:33:47.092 --rc genhtml_function_coverage=1 00:33:47.092 --rc genhtml_legend=1 00:33:47.092 --rc geninfo_all_blocks=1 00:33:47.092 --rc geninfo_unexecuted_blocks=1 00:33:47.092 00:33:47.092 ' 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:33:47.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.092 --rc genhtml_branch_coverage=1 00:33:47.092 --rc genhtml_function_coverage=1 00:33:47.092 --rc genhtml_legend=1 00:33:47.092 --rc geninfo_all_blocks=1 00:33:47.092 --rc geninfo_unexecuted_blocks=1 00:33:47.092 00:33:47.092 ' 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:33:47.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.092 --rc genhtml_branch_coverage=1 00:33:47.092 --rc genhtml_function_coverage=1 00:33:47.092 --rc genhtml_legend=1 00:33:47.092 --rc geninfo_all_blocks=1 00:33:47.092 --rc geninfo_unexecuted_blocks=1 00:33:47.092 00:33:47.092 ' 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.092 05:09:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.092 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:47.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:47.093 05:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:48.992 05:09:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:48.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:48.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.992 
05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:48.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:48.992 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:48.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:48.993 05:09:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:48.993 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:49.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:49.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:33:49.250 00:33:49.250 --- 10.0.0.2 ping statistics --- 00:33:49.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.250 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:49.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:49.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:33:49.250 00:33:49.250 --- 10.0.0.1 ping statistics --- 00:33:49.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.250 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=2460610 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 2460610 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2460610 ']' 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
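The nvmftestinit steps traced above move one physical port (cvl_0_0) into the cvl_0_0_ns_spdk namespace at 10.0.0.2, keep its peer (cvl_0_1) on the host at 10.0.0.1, open TCP/4420 in iptables, and verify reachability with ping before nvmf_tgt is started inside the namespace. A minimal sketch of the same two-endpoint layout, using a hypothetical veth pair in place of the physical cvl_0_* ports:

ip netns add tgt_ns                                   # namespace that will host the target
ip link add veth_init type veth peer name veth_tgt    # stand-in for the two e810 ports
ip link set veth_tgt netns tgt_ns
ip addr add 10.0.0.1/24 dev veth_init && ip link set veth_init up
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip netns exec tgt_ns ip link set veth_tgt up
ip netns exec tgt_ns ip link set lo up
ping -c 1 10.0.0.2                                    # initiator-side reachability check, as in the trace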
00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:49.250 05:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=eb05493ffac5cc6bae615ccf4cbcf796 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.93k 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key eb05493ffac5cc6bae615ccf4cbcf796 0 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 eb05493ffac5cc6bae615ccf4cbcf796 0 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=eb05493ffac5cc6bae615ccf4cbcf796 00:33:50.625 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.93k 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.93k 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.93k 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:50.626 05:09:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=84a91675e465405dd389a95c7a141bc41f564b97a9bba47a346213881c2b70f0 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.rtT 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 84a91675e465405dd389a95c7a141bc41f564b97a9bba47a346213881c2b70f0 3 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 84a91675e465405dd389a95c7a141bc41f564b97a9bba47a346213881c2b70f0 3 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=84a91675e465405dd389a95c7a141bc41f564b97a9bba47a346213881c2b70f0 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.rtT 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.rtT 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.rtT 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8f8942210236f9fc9276327119a270bd5d9eaa9e0a780205 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.XTd 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8f8942210236f9fc9276327119a270bd5d9eaa9e0a780205 0 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8f8942210236f9fc9276327119a270bd5d9eaa9e0a780205 0 
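Each gen_dhchap_key call above draws a hex secret from /dev/urandom with xxd, pipes it through a small helper (the bare "python -" in the trace) to produce the DHHC-1 string, writes it to a /tmp/spdk.key-* file, and chmods it to 0600. A minimal sketch of that formatting step, assuming the DH-HMAC-CHAP secret representation of base64(secret bytes followed by their little-endian CRC-32) with the digest id in the second field:

key=$(xxd -p -c0 -l 16 /dev/urandom)        # 32-hex-char secret, as in gen_dhchap_key null 32
formatted=$(python3 - "$key" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                        # the ASCII hex string is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")       # assumption: CRC-32 appended little-endian
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
EOF
)
file=$(mktemp -t spdk.key-null.XXX)
echo "$formatted" > "$file" && chmod 0600 "$file"    # mirrors the chmod 0600 in the trace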
00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8f8942210236f9fc9276327119a270bd5d9eaa9e0a780205 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:50.626 05:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.XTd 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.XTd 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.XTd 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=489ac5513590dba7c02c24b8ee519b28dcf3e1ea23e5ef92 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.XPc 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 489ac5513590dba7c02c24b8ee519b28dcf3e1ea23e5ef92 2 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 489ac5513590dba7c02c24b8ee519b28dcf3e1ea23e5ef92 2 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=489ac5513590dba7c02c24b8ee519b28dcf3e1ea23e5ef92 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.XPc 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.XPc 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.XPc 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.626 05:09:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=cf06e22cad17a0e5103bce42a7dd300a 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.9wF 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key cf06e22cad17a0e5103bce42a7dd300a 1 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 cf06e22cad17a0e5103bce42a7dd300a 1 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=cf06e22cad17a0e5103bce42a7dd300a 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.9wF 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.9wF 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.9wF 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e3a62c03b1b95e27389d0d78e7862dbe 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.RII 00:33:50.626 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e3a62c03b1b95e27389d0d78e7862dbe 1 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e3a62c03b1b95e27389d0d78e7862dbe 1 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=e3a62c03b1b95e27389d0d78e7862dbe 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.RII 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.RII 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.RII 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1239a86d7e59a8e872e37e21a659b0253aa438fa7d0ef630 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.cGj 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 1239a86d7e59a8e872e37e21a659b0253aa438fa7d0ef630 2 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1239a86d7e59a8e872e37e21a659b0253aa438fa7d0ef630 2 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1239a86d7e59a8e872e37e21a659b0253aa438fa7d0ef630 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:33:50.627 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.cGj 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.cGj 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.cGj 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:50.886 05:09:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a9e23d9ce20412dd5ce2eaf96f3e9e90 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.TOR 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a9e23d9ce20412dd5ce2eaf96f3e9e90 0 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a9e23d9ce20412dd5ce2eaf96f3e9e90 0 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a9e23d9ce20412dd5ce2eaf96f3e9e90 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.TOR 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.TOR 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.TOR 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c2d1d6e5833f52acbbbfe30ba3b5831cbd64600e6267067ab3b4d5a07977fc80 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.DI5 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c2d1d6e5833f52acbbbfe30ba3b5831cbd64600e6267067ab3b4d5a07977fc80 3 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c2d1d6e5833f52acbbbfe30ba3b5831cbd64600e6267067ab3b4d5a07977fc80 3 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=c2d1d6e5833f52acbbbfe30ba3b5831cbd64600e6267067ab3b4d5a07977fc80 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.DI5 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.DI5 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.DI5 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2460610 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2460610 ']' 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:50.886 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.93k 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.rtT ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rtT 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.XTd 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.XPc ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.XPc 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.9wF 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.RII ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RII 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.cGj 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.TOR ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.TOR 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.DI5 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:51.145 05:09:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:51.145 05:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:52.520 Waiting for block devices as requested 00:33:52.520 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:52.520 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:52.520 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:52.779 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:52.779 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:52.779 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:52.779 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:53.038 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:53.038 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:53.038 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:53.038 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:53.296 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:53.296 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:53.296 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:53.296 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:53.553 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:53.554 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:54.121 No valid GPT data, bailing 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:54.121 05:09:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:54.121 00:33:54.121 Discovery Log Number of Records 2, Generation counter 2 00:33:54.121 =====Discovery Log Entry 0====== 00:33:54.121 trtype: tcp 00:33:54.121 adrfam: ipv4 00:33:54.121 subtype: current discovery subsystem 00:33:54.121 treq: not specified, sq flow control disable supported 00:33:54.121 portid: 1 00:33:54.121 trsvcid: 4420 00:33:54.121 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:54.121 traddr: 10.0.0.1 00:33:54.121 eflags: none 00:33:54.121 sectype: none 00:33:54.121 =====Discovery Log Entry 1====== 00:33:54.121 trtype: tcp 00:33:54.121 adrfam: ipv4 00:33:54.121 subtype: nvme subsystem 00:33:54.121 treq: not specified, sq flow control disable supported 00:33:54.121 portid: 1 00:33:54.121 trsvcid: 4420 00:33:54.121 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:54.121 traddr: 10.0.0.1 00:33:54.121 eflags: none 00:33:54.121 sectype: none 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:54.121 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.122 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.381 nvme0n1 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.381 05:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.640 nvme0n1 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.640 05:09:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:54.640 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:54.641 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:54.641 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.641 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.641 nvme0n1 00:33:54.641 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.900 nvme0n1 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.900 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.159 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.160 nvme0n1 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.160 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.419 nvme0n1 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.419 05:09:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:55.419 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.420 05:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.678 nvme0n1 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:55.678 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:55.679 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:55.679 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.679 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.679 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:55.679 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.679 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:55.679 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:55.679 
05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:55.679 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:55.679 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.679 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.937 nvme0n1 00:33:55.937 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.937 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.937 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.937 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.937 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.937 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.937 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.937 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.937 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.937 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.937 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.937 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.938 05:09:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.938 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.197 nvme0n1 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.197 05:09:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.197 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.456 nvme0n1 00:33:56.456 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.456 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.457 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.457 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.457 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.457 05:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:56.457 05:09:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.457 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.716 nvme0n1 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:33:56.716 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.717 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.284 nvme0n1 00:33:57.284 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.284 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.284 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.284 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.284 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.284 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.284 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.284 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.284 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.284 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.284 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:33:57.285 05:09:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.285 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.543 nvme0n1 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.543 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:57.544 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:33:57.544 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:33:57.544 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:33:57.544 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:57.544 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.544 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.544 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:57.544 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:57.544 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.544 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:57.544 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
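The same host-side cycle repeats for every keyid in the trace above and below: program the target, restrict the initiator's allowed DH-HMAC-CHAP parameters, attach, verify, detach. A minimal sketch of one pass, assuming rpc_cmd forwards to the running SPDK JSON-RPC server (e.g. via scripts/rpc.py) and that the key names keyN/ckeyN were registered by the earlier test setup; only the values shown in this part of the trace are used:

# One connect_authenticate pass as it appears in the xtrace (sha256 / ffdhe4096 / keyid=2)
digest=sha256 dhgroup=ffdhe4096 keyid=2
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
ip=10.0.0.1   # get_main_ns_ip resolves NVMF_INITIATOR_IP for a tcp transport
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# the controller only shows up if DH-HMAC-CHAP authentication succeeded
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

When ckeys[keyid] is empty (keyid 4 in this log), the --dhchap-ctrlr-key argument is simply dropped, which is what the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 is doing.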
00:33:57.544 05:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.544 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.802 nvme0n1 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.802 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.370 nvme0n1 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.370 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.371 05:09:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.371 05:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.630 nvme0n1 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.630 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.197 nvme0n1 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 
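On the target side, nvmet_auth_set_key (host/auth.sh@42-51 in the trace) programs the same digest, DH group and key before each connection attempt. The redirection targets of its echo calls are hidden by xtrace, so the configfs paths below are an assumption based on the kernel nvmet host attributes, not something this log shows; the echoed values are the ones visible above:

# Hypothetical reconstruction of the target-side helper; paths are assumed
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed location
    echo "hmac(${digest})"  > "$host/dhchap_hash"
    echo "$dhgroup"         > "$host/dhchap_dhgroup"
    echo "${keys[keyid]}"   > "$host/dhchap_key"
    # a controller (bidirectional) key is only written when ckeys[keyid] is non-empty
    [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
}

The outer loops at host/auth.sh@101-104 then walk every entry of dhgroups (ffdhe3072, ffdhe4096, ffdhe6144 and ffdhe8192 appear in this part of the log) and every keyid 0-4 through this set-key/connect pair, which is why the same block of RPCs recurs with only the DH group and key index changing.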
00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.197 05:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.763 nvme0n1 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.763 05:09:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.763 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.331 nvme0n1 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.331 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.332 05:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.898 nvme0n1 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.898 05:09:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.898 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:00.899 05:09:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.899 05:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.465 nvme0n1 00:34:01.465 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.465 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.465 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.465 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.465 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.465 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.465 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.465 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.465 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.465 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.724 05:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:02.656 nvme0n1 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.656 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:02.657 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.657 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:02.657 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:02.657 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:02.657 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.657 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.657 05:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.589 nvme0n1 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:03.589 
05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:03.589 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:03.590 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:03.590 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.590 05:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.962 nvme0n1 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.962 
05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.962 05:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.529 nvme0n1 00:34:05.529 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.529 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.529 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.529 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.529 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.529 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.787 05:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.723 nvme0n1 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.723 nvme0n1 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.723 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.982 nvme0n1 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.982 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:07.241 05:09:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.241 nvme0n1 00:34:07.241 05:09:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.241 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.500 nvme0n1 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.500 05:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.500 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.759 nvme0n1 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.759 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.018 nvme0n1 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.018 
05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.018 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.019 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:08.019 05:09:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.019 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:08.019 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:08.019 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:08.019 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:08.019 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.019 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.277 nvme0n1 00:34:08.277 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.277 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.277 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.277 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.277 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.278 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.537 nvme0n1 00:34:08.537 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.537 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.537 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.537 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.537 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.537 05:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.537 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.538 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.796 nvme0n1 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.796 
05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.796 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.055 nvme0n1 00:34:09.055 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.055 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.055 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.055 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.055 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.055 
05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.055 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.055 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.055 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.055 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.055 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.055 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.056 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.313 nvme0n1 00:34:09.314 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.314 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.314 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.314 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.314 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.314 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.572 05:09:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.572 05:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.830 nvme0n1 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.830 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.831 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.090 nvme0n1 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.090 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.349 nvme0n1 00:34:10.349 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.608 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.608 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.608 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.608 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.608 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.608 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.608 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.608 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.608 05:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.608 05:10:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.608 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.867 nvme0n1 00:34:10.867 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.867 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.867 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.867 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.867 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.867 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.867 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.867 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.867 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.867 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.868 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.435 nvme0n1 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.435 05:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.002 nvme0n1 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.002 05:10:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.002 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.260 05:10:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.260 05:10:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.828 nvme0n1 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.828 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.828 
05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.395 nvme0n1 00:34:13.395 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.395 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.396 05:10:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.963 nvme0n1 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.963 05:10:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.963 05:10:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.898 nvme0n1 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.898 05:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.833 nvme0n1 00:34:15.833 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.833 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.833 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.833 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.833 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.833 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.124 
05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.124 05:10:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.177 nvme0n1 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.177 05:10:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.112 nvme0n1 00:34:18.112 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.112 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.112 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.112 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.112 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.112 05:10:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:18.113 05:10:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.113 05:10:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.048 nvme0n1 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.048 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:19.305 nvme0n1 00:34:19.305 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.305 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.305 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.305 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.305 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.305 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.305 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.305 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.305 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.305 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.305 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.306 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.563 nvme0n1 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:19.563 
05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.563 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.564 05:10:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.564 nvme0n1 00:34:19.564 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.564 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.564 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.564 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.564 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.564 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.822 
05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.822 nvme0n1 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.822 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.080 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.080 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.080 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:20.080 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:20.080 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:20.080 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.080 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.081 nvme0n1 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.081 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.339 nvme0n1 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.339 
05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:20.339 05:10:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.339 05:10:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.597 nvme0n1 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:20.597 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:20.598 05:10:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.598 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.855 nvme0n1 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.855 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.856 05:10:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.856 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.113 nvme0n1 00:34:21.113 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.113 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.113 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.113 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.113 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.113 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.113 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.113 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.113 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.113 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.114 
05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.114 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
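The trace above repeats one cycle per digest/dhgroup/keyid combination: push the secret into the kernel nvmet target (nvmet_auth_set_key), restrict the host side to the digest and DH group under test (bdev_nvme_set_options), attach with the matching named key pair, confirm the controller shows up as nvme0, then detach before the next combination. Below is a minimal sketch of one such iteration driven directly through SPDK's scripts/rpc.py. It is reconstructed from this trace, not copied from auth.sh: the rpc.py path, the configfs attribute names, and the placeholder DHHC-1 secrets are assumptions; the key names key1/ckey1 are assumed to have been registered earlier in the run.

#!/usr/bin/env bash
# Hedged sketch of one nvmf_auth_host iteration, reconstructed from the xtrace above.
# Assumptions (not visible verbatim in this trace): rpc.py sits at scripts/rpc.py in
# the SPDK tree, the DHCHAP keys were registered earlier under the names key1/ckey1,
# and the nvmet target exposes the usual configfs attributes for in-band auth.
set -euo pipefail

rpc=./scripts/rpc.py
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

# Target side: give the host a DH-HMAC-CHAP secret (configfs paths are an assumption).
host_cfg=/sys/kernel/config/nvmet/hosts/${hostnqn}
echo 'hmac(sha512)'             > "${host_cfg}/dhchap_hash"
echo ffdhe2048                  > "${host_cfg}/dhchap_dhgroup"
echo 'DHHC-1:00:<host secret>:' > "${host_cfg}/dhchap_key"
echo 'DHHC-1:00:<ctrl secret>:' > "${host_cfg}/dhchap_ctrl_key"

# Host side: only offer the digest/DH group under test, then connect with the
# matching named key pair and verify the controller came up as nvme0.
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0

# Tear down before the next digest/dhgroup/keyid combination.
$rpc bdev_nvme_detach_controller nvme0

In the trace itself, rpc_cmd is the autotest wrapper around these same rpc.py calls, and the grep against nvme0 above corresponds to the [[ nvme0 == \n\v\m\e\0 ]] check at host/auth.sh@64.
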
00:34:21.381 nvme0n1 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:21.381 05:10:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.381 05:10:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.638 nvme0n1 00:34:21.638 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.638 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.638 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.638 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.638 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.638 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.895 05:10:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.895 05:10:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.895 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.154 nvme0n1 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.154 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.412 nvme0n1 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.412 05:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.413 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.672 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.930 nvme0n1 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.930 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.188 nvme0n1 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:34:23.188 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.189 05:10:13 
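The nvmet_auth_set_key calls traced above install the DH-HMAC-CHAP secrets on the target side before each connect attempt: the helper echoes the digest as 'hmac(sha512)', the DH group, the host key and (when present) the controller key. This excerpt does not show where those values end up; a minimal sketch of such a helper against the Linux kernel nvmet configfs interface is given below, with the configfs attribute names and the hostnqn path taken as assumptions rather than from this log.

# Illustrative sketch only; not the actual host/auth.sh implementation.
# Assumes the kernel nvmet configfs host entry already exists.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # hostnqn assumed

    echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha512), as echoed in the trace
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe6144
    echo "${key}"          > "${host}/dhchap_key"       # host secret, DHHC-1:... string
    # Controller key only when bidirectional authentication is being tested.
    [[ -n "${ckey}" ]] && echo "${ckey}" > "${host}/dhchap_ctrl_key"
}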
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.189 05:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.753 nvme0n1 00:34:23.753 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.753 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.753 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.753 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.753 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.753 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.753 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.753 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.753 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.753 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:24.013 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.013 05:10:14 
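On the host side, connect_authenticate drives SPDK over JSON-RPC: it first restricts the digests and DH groups the initiator may negotiate, then attaches the controller with the key pair under test. The sequence traced above for sha512/ffdhe6144 with keyid 1, condensed into a hand-written form (the rpc.py path and socket are assumptions; key1 and ckey1 are key names registered earlier in the test, outside this excerpt):

# Condensed from the trace above; the rpc.py path and socket are assumptions.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

# Limit negotiation to the digest and DH group under test.
$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Attach with DH-HMAC-CHAP, passing the host key and the controller (bidirectional) key.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

If the handshake fails, the attach RPC errors out and no nvme0 controller would show up in the subsequent bdev_nvme_get_controllers check.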
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.014 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.580 nvme0n1 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.580 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.581 05:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.148 nvme0n1 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:25.148 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:25.149 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:25.149 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.149 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.149 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:25.149 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.149 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:25.149 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:25.149 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:25.149 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:25.149 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.149 05:10:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.716 nvme0n1 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.716 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.717 05:10:16 
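All of the secrets in this run use the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<id>:<base64>:, where the two-digit id indicates how the secret is transformed before use (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the secret bytes followed by a 4-byte CRC-32. As a quick, stand-alone illustration (not part of the test suite), the keyid-2 secret from this log can be inspected like this:

# Illustrative only: inspect a DHHC-1 secret taken from the log above.
secret='DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4:'
b64=$(echo "$secret" | cut -d: -f3)          # base64 payload between the 2nd and 3rd ':'
bytes=$(echo "$b64" | base64 -d | wc -c)     # decoded length = secret bytes + 4-byte CRC-32
echo "transform id: $(echo "$secret" | cut -d: -f2)"   # 00=none, 01/02/03=SHA-256/384/512
echo "secret length: $((bytes - 4)) bytes"             # 32 bytes for this particular key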
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.717 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.284 nvme0n1 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIwNTQ5M2ZmYWM1Y2M2YmFlNjE1Y2NmNGNiY2Y3OTaJRKtJ: 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: ]] 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRhOTE2NzVlNDY1NDA1ZGQzODlhOTVjN2ExNDFiYzQxZjU2NGI5N2E5YmJhNDdhMzQ2MjEzODgxYzJiNzBmMEZ6uBM=: 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.284 05:10:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.219 nvme0n1 00:34:27.219 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.219 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.219 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.219 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.219 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.219 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:27.478 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.479 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.479 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:27.479 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.479 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:27.479 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:27.479 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:27.479 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.479 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.479 05:10:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.414 nvme0n1 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.414 05:10:18 
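Every iteration ends with the same verification and teardown pattern visible in the trace: list the attached controllers, check that nvme0 is present (which is what proves the DH-HMAC-CHAP handshake succeeded for this digest/dhgroup/key combination), then detach it so the next combination starts from a clean state. Condensed, with an assumed rpc.py invocation:

# Per-iteration check, condensed from the trace (rpc.py invocation assumed).
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]] || { echo "DH-HMAC-CHAP authentication failed" >&2; exit 1; }
scripts/rpc.py bdev_nvme_detach_controller nvme0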
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:28.414 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.415 05:10:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.415 05:10:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.349 nvme0n1 00:34:29.349 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.349 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.349 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.349 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.349 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.349 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.349 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.349 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIzOWE4NmQ3ZTU5YThlODcyZTM3ZTIxYTY1OWIwMjUzYWE0MzhmYTdkMGVmNjMwFxmd8w==: 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: ]] 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTllMjNkOWNlMjA0MTJkZDVjZTJlYWY5NmYzZTllOTBoca80: 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:29.350 05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.350 
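The get_main_ns_ip helper that runs before every attach simply maps the transport under test to the environment variable holding the address to dial, NVMF_FIRST_TARGET_IP for rdma and NVMF_INITIATOR_IP for tcp, which is why every connection in this run goes to 10.0.0.1. A reconstruction from the xtrace follows (the transport variable name is an assumption; the guards mirror the [[ -z ... ]] checks in the trace, so the exact function body may differ):

# Reconstructed from the xtrace above (nvmf/common.sh@767-781); TEST_TRANSPORT name assumed.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    [[ -z "$TEST_TRANSPORT" ]] && return 1                      # trace: [[ -z tcp ]]
    [[ -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1    # trace: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z "${!ip}" ]] && return 1                               # indirect expansion; trace: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                               # -> 10.0.0.1 in this run
}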
05:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.283 nvme0n1 00:34:30.283 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.283 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.283 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.283 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.283 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.283 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.283 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.283 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJkMWQ2ZTU4MzNmNTJhY2JiYmZlMzBiYTNiNTgzMWNiZDY0NjAwZTYyNjcwNjdhYjNiNGQ1YTA3OTc3ZmM4MIiZ6h8=: 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.284 05:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.659 nvme0n1 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:31.659 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.660 request: 00:34:31.660 { 00:34:31.660 "name": "nvme0", 00:34:31.660 "trtype": "tcp", 00:34:31.660 "traddr": "10.0.0.1", 00:34:31.660 "adrfam": "ipv4", 00:34:31.660 "trsvcid": "4420", 00:34:31.660 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:31.660 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:31.660 "prchk_reftag": false, 00:34:31.660 "prchk_guard": false, 00:34:31.660 "hdgst": false, 00:34:31.660 "ddgst": false, 00:34:31.660 "allow_unrecognized_csi": false, 00:34:31.660 "method": "bdev_nvme_attach_controller", 00:34:31.660 "req_id": 1 00:34:31.660 } 00:34:31.660 Got JSON-RPC error response 00:34:31.660 response: 00:34:31.660 { 00:34:31.660 "code": -5, 00:34:31.660 "message": "Input/output error" 00:34:31.660 } 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.660 05:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
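The request/response pair printed above is the expected outcome of host/auth.sh@112: with the kernel target now requiring sha256/ffdhe2048 authentication, an attach that supplies no --dhchap-key at all must fail, and SPDK reports it as JSON-RPC error -5 (Input/output error). The check, condensed (rpc_cmd and NOT are harness helpers from the trace):

  # negative check: unauthenticated attach must be rejected
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
  # and no controller may be left behind
  (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))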
00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.660 request: 00:34:31.660 { 00:34:31.660 "name": "nvme0", 00:34:31.660 "trtype": "tcp", 00:34:31.660 "traddr": "10.0.0.1", 00:34:31.660 "adrfam": "ipv4", 00:34:31.660 "trsvcid": "4420", 00:34:31.660 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:31.660 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:31.660 "prchk_reftag": false, 00:34:31.660 "prchk_guard": false, 00:34:31.660 "hdgst": false, 00:34:31.660 "ddgst": false, 00:34:31.660 "dhchap_key": "key2", 00:34:31.660 "allow_unrecognized_csi": false, 00:34:31.660 "method": "bdev_nvme_attach_controller", 00:34:31.660 "req_id": 1 00:34:31.660 } 00:34:31.660 Got JSON-RPC error response 00:34:31.660 response: 00:34:31.660 { 00:34:31.660 "code": -5, 00:34:31.660 "message": "Input/output error" 00:34:31.660 } 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
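The same rejection is repeated above with a key the target does not accept ("dhchap_key": "key2" in the request body), again surfacing as error -5. Both checks lean on the NOT helper from autotest_common.sh; stripped of the valid_exec_arg plumbing, its bookkeeping amounts to the condensed restatement below (not the full implementation, which also inspects es against the signal range via (( es > 128 ))):

  # condensed stand-in for the NOT helper seen in the trace
  not() {
      local es=0
      "$@" || es=$?
      (( !es == 0 ))   # succeed only when the wrapped command failed
  }
  not rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2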
00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.660 request: 00:34:31.660 { 00:34:31.660 "name": "nvme0", 00:34:31.660 "trtype": "tcp", 00:34:31.660 "traddr": "10.0.0.1", 00:34:31.660 "adrfam": "ipv4", 00:34:31.660 "trsvcid": "4420", 00:34:31.660 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:31.660 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:31.660 "prchk_reftag": false, 00:34:31.660 "prchk_guard": false, 00:34:31.660 "hdgst": false, 00:34:31.660 "ddgst": false, 00:34:31.660 "dhchap_key": "key1", 00:34:31.660 "dhchap_ctrlr_key": "ckey2", 00:34:31.660 "allow_unrecognized_csi": false, 00:34:31.660 "method": "bdev_nvme_attach_controller", 00:34:31.660 "req_id": 1 00:34:31.660 } 00:34:31.660 Got JSON-RPC error response 00:34:31.660 response: 00:34:31.660 { 00:34:31.660 "code": -5, 00:34:31.660 "message": "Input/output 
error" 00:34:31.660 } 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:31.660 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.661 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.919 nvme0n1 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.919 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.920 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.920 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:31.920 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:31.920 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:31.920 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:31.920 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.920 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:31.920 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:31.920 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:31.920 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.920 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.178 request: 00:34:32.178 { 00:34:32.178 "name": "nvme0", 00:34:32.178 "dhchap_key": "key1", 00:34:32.178 "dhchap_ctrlr_key": "ckey2", 00:34:32.178 "method": "bdev_nvme_set_keys", 00:34:32.178 "req_id": 1 00:34:32.178 } 00:34:32.178 Got JSON-RPC error response 00:34:32.178 response: 00:34:32.178 { 00:34:32.178 "code": -13, 00:34:32.178 "message": "Permission denied" 00:34:32.178 } 00:34:32.178 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:32.178 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:32.178 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:32.178 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:32.178 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:34:32.178 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.178 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.178 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.178 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:32.178 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.178 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:32.178 05:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:33.112 05:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.112 05:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:33.112 05:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.112 05:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.112 05:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.112 05:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:33.112 05:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGY4OTQyMjEwMjM2ZjlmYzkyNzYzMjcxMTlhMjcwYmQ1ZDllYWE5ZTBhNzgwMjA1E/asXg==: 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: ]] 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NDg5YWM1NTEzNTkwZGJhN2MwMmMyNGI4ZWU1MTliMjhkY2YzZTFlYTIzZTVlZjkyA5+3eA==: 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.489 nvme0n1 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwNmUyMmNhZDE3YTBlNTEwM2JjZTQyYTdkZDMwMGFHemt4: 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: ]] 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTNhNjJjMDNiMWI5NWUyNzM4OWQwZDc4ZTc4NjJkYmUSQGic: 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.489 request: 00:34:34.489 { 00:34:34.489 "name": "nvme0", 00:34:34.489 "dhchap_key": "key2", 00:34:34.489 "dhchap_ctrlr_key": "ckey1", 00:34:34.489 "method": "bdev_nvme_set_keys", 00:34:34.489 "req_id": 1 00:34:34.489 } 00:34:34.489 Got JSON-RPC error response 00:34:34.489 response: 00:34:34.489 { 00:34:34.489 "code": -13, 00:34:34.489 "message": "Permission denied" 00:34:34.489 } 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:34.489 05:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:35.423 05:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.423 05:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:35.423 05:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.423 05:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.423 05:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.423 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:35.423 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:35.423 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:35.423 05:10:26 
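The closing checks traced above exercise bdev_nvme_set_keys on the live controller: rotating to the pair the re-keyed target expects (key2/ckey2) is accepted, while a mismatched controller key (ckey1) is refused with JSON-RPC error -13 (Permission denied), after which the harness polls once per second until the controller, attached earlier with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, drops out. Condensed from several steps of the trace (rpc_cmd is the harness wrapper; 'not' is the condensed helper sketched earlier):

  # DH-HMAC-CHAP re-keying checks, condensed
  rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2       # accepted
  not rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1   # -13 Permission denied
  while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1s   # wait for the 1 s ctrlr-loss timeout to reap the controller
  done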
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:35.423 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:35.423 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:35.423 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:35.423 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:35.423 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:35.423 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:35.682 rmmod nvme_tcp 00:34:35.682 rmmod nvme_fabrics 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 2460610 ']' 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 2460610 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2460610 ']' 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2460610 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2460610 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2460610' 00:34:35.682 killing process with pid 2460610 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2460610 00:34:35.682 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2460610 00:34:35.941 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:35.941 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:35.941 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:35.941 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:35.941 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:34:35.941 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:35.941 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:34:35.941 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:35.941 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:35.941 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.941 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:34:35.941 05:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:34:37.841 05:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:39.218 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:39.218 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:39.218 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:39.218 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:39.218 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:39.218 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:39.218 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:39.218 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:39.218 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:39.218 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:39.218 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:39.218 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:39.218 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:39.218 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:39.218 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:39.218 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:40.232 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:40.232 05:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.93k /tmp/spdk.key-null.XTd /tmp/spdk.key-sha256.9wF /tmp/spdk.key-sha384.cGj /tmp/spdk.key-sha512.DI5 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:40.232 05:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:41.608 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:41.608 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:41.608 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
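Teardown in the trace above removes the kernel nvmet target bottom-up through configfs before unloading the modules; the order matters, since links and namespaces have to go before their parent directories can be removed. The sequence, restated without the xtrace prefixes (the trace also disables the namespace with an 'echo 0' whose target attribute is not visible in this excerpt):

  # clean_kernel_target, as executed in the trace
  rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet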
00:34:41.608 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:41.608 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:41.608 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:41.608 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:41.608 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:41.608 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:41.608 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:41.608 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:41.608 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:41.608 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:41.608 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:41.608 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:41.608 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:41.608 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:41.608 00:34:41.608 real 0m54.513s 00:34:41.608 user 0m52.187s 00:34:41.608 sys 0m6.046s 00:34:41.608 05:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:41.608 05:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.608 ************************************ 00:34:41.608 END TEST nvmf_auth_host 00:34:41.608 ************************************ 00:34:41.608 05:10:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:41.608 05:10:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:41.608 05:10:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:41.608 05:10:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:41.608 05:10:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.608 ************************************ 00:34:41.608 START TEST nvmf_digest 00:34:41.608 ************************************ 00:34:41.608 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:41.608 * Looking for test storage... 
00:34:41.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:41.608 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:34:41.608 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1689 -- # lcov --version 00:34:41.608 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:34:41.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.868 --rc genhtml_branch_coverage=1 00:34:41.868 --rc genhtml_function_coverage=1 00:34:41.868 --rc genhtml_legend=1 00:34:41.868 --rc geninfo_all_blocks=1 00:34:41.868 --rc geninfo_unexecuted_blocks=1 00:34:41.868 00:34:41.868 ' 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:34:41.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.868 --rc genhtml_branch_coverage=1 00:34:41.868 --rc genhtml_function_coverage=1 00:34:41.868 --rc genhtml_legend=1 00:34:41.868 --rc geninfo_all_blocks=1 00:34:41.868 --rc geninfo_unexecuted_blocks=1 00:34:41.868 00:34:41.868 ' 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:34:41.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.868 --rc genhtml_branch_coverage=1 00:34:41.868 --rc genhtml_function_coverage=1 00:34:41.868 --rc genhtml_legend=1 00:34:41.868 --rc geninfo_all_blocks=1 00:34:41.868 --rc geninfo_unexecuted_blocks=1 00:34:41.868 00:34:41.868 ' 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:34:41.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.868 --rc genhtml_branch_coverage=1 00:34:41.868 --rc genhtml_function_coverage=1 00:34:41.868 --rc genhtml_legend=1 00:34:41.868 --rc geninfo_all_blocks=1 00:34:41.868 --rc geninfo_unexecuted_blocks=1 00:34:41.868 00:34:41.868 ' 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.868 
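The digest suite begins by probing the installed lcov: scripts/common.sh splits each version string on '.', '-' and ':' and compares the components numerically from left to right, so the lt 1.15 2 call traced above returns true and the --rc lcov_branch_coverage/lcov_function_coverage form of LCOV_OPTS is exported further down. A condensed restatement of that comparison (the real cmp_versions also validates each component through its decimal helper):

  # component-wise version compare, condensed from scripts/common.sh as traced
  lt() {
      local -a ver1 ver2
      local v
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov 1.15 sorts before 2, as in the trace"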
05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:41.868 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:41.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:41.869 05:10:32 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:41.869 05:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.770 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.771 
05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:43.771 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:43.771 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:43.771 Found net devices under 0000:0a:00.0: cvl_0_0 
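The two "Found net devices under ..." messages above come from the sysfs walk that nvmf/common.sh does for each supported PCI function. A minimal standalone sketch of that lookup, reusing the 0000:0a:00.0 address and the glob pattern visible in the trace (the loop body is illustrative, not the harness's exact code):

  #!/usr/bin/env bash
  # Resolve a PCI function to its kernel net interface(s) via sysfs,
  # the same way the trace above maps 0000:0a:00.x to cvl_0_0/cvl_0_1.
  pci=0000:0a:00.0                      # address taken from the log above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  for net_dev in "${pci_net_devs[@]}"; do
      [ -e "$net_dev" ] || continue     # glob stays literal if the NIC exposes no netdev
      echo "Found net devices under $pci: ${net_dev##*/}"
  done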
00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:43.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:43.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:43.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:34:43.771 00:34:43.771 --- 10.0.0.2 ping statistics --- 00:34:43.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.771 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:34:43.771 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:44.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:44.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:34:44.029 00:34:44.029 --- 10.0.0.1 ping statistics --- 00:34:44.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.029 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:44.029 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:44.030 ************************************ 00:34:44.030 START TEST nvmf_digest_clean 00:34:44.030 ************************************ 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=2470459 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 2470459 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2470459 ']' 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:44.030 05:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:44.030 [2024-10-28 05:10:34.462659] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:34:44.030 [2024-10-28 05:10:34.462745] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:44.030 [2024-10-28 05:10:34.601740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:44.288 [2024-10-28 05:10:34.637312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.288 [2024-10-28 05:10:34.685607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:44.288 [2024-10-28 05:10:34.685679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:44.288 [2024-10-28 05:10:34.685696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:44.288 [2024-10-28 05:10:34.685709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:44.288 [2024-10-28 05:10:34.685720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:44.288 [2024-10-28 05:10:34.686382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.223 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:45.223 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:45.223 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:45.223 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:45.223 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:45.223 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:45.223 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:45.223 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:45.223 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:45.223 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.223 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:45.223 null0 00:34:45.224 [2024-10-28 05:10:35.605525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:45.224 [2024-10-28 05:10:35.629719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2470606 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2470606 /var/tmp/bperf.sock 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2470606 ']' 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:45.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:45.224 05:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:45.224 [2024-10-28 05:10:35.683697] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:34:45.224 [2024-10-28 05:10:35.683772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470606 ] 00:34:45.483 [2024-10-28 05:10:35.821811] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:45.483 [2024-10-28 05:10:35.862313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.483 [2024-10-28 05:10:35.913302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.418 05:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:46.418 05:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:46.418 05:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:46.418 05:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:46.418 05:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:46.675 05:10:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:46.675 05:10:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:46.934 nvme0n1 00:34:46.934 05:10:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:46.934 05:10:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:47.192 Running I/O for 2 seconds... 
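Condensed from the trace just above, each run_bperf iteration drives the same three RPCs against the bdevperf instance over /var/tmp/bperf.sock; the paths and arguments below are copied from this log, only the shell variables are mine. The two-second results that follow were produced by exactly this flow.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # bdevperf was started with --wait-for-rpc, so finish its framework init first
  $SPDK/scripts/rpc.py -s $SOCK framework_start_init

  # attach the NVMe-oF/TCP controller with data digest enabled (--ddgst);
  # the target listens on 10.0.0.2:4420 inside the cvl_0_0_ns_spdk namespace
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # kick off the timed workload that bdevperf was configured with on its command line
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests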
00:34:49.059 17920.00 IOPS, 70.00 MiB/s [2024-10-28T04:10:39.655Z] 18405.50 IOPS, 71.90 MiB/s 00:34:49.059 Latency(us) 00:34:49.059 [2024-10-28T04:10:39.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.059 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:49.059 nvme0n1 : 2.01 18400.40 71.88 0.00 0.00 6944.54 3674.01 21995.38 00:34:49.059 [2024-10-28T04:10:39.655Z] =================================================================================================================== 00:34:49.059 [2024-10-28T04:10:39.655Z] Total : 18400.40 71.88 0.00 0.00 6944.54 3674.01 21995.38 00:34:49.059 { 00:34:49.059 "results": [ 00:34:49.059 { 00:34:49.059 "job": "nvme0n1", 00:34:49.059 "core_mask": "0x2", 00:34:49.059 "workload": "randread", 00:34:49.059 "status": "finished", 00:34:49.059 "queue_depth": 128, 00:34:49.059 "io_size": 4096, 00:34:49.059 "runtime": 2.005011, 00:34:49.059 "iops": 18400.397803303822, 00:34:49.059 "mibps": 71.87655391915555, 00:34:49.059 "io_failed": 0, 00:34:49.059 "io_timeout": 0, 00:34:49.059 "avg_latency_us": 6944.538511566452, 00:34:49.059 "min_latency_us": 3674.0063114906256, 00:34:49.059 "max_latency_us": 21995.375533692222 00:34:49.059 } 00:34:49.059 ], 00:34:49.059 "core_count": 1 00:34:49.059 } 00:34:49.059 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:49.059 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:49.059 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:49.059 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:49.059 | select(.opcode=="crc32c") 00:34:49.059 | "\(.module_name) \(.executed)"' 00:34:49.059 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2470606 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2470606 ']' 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2470606 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2470606 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2470606' 00:34:49.316 killing process with pid 2470606 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2470606 00:34:49.316 Received shutdown signal, test time was about 2.000000 seconds 00:34:49.316 00:34:49.316 Latency(us) 00:34:49.316 [2024-10-28T04:10:39.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.316 [2024-10-28T04:10:39.912Z] =================================================================================================================== 00:34:49.316 [2024-10-28T04:10:39.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:49.316 05:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2470606 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2471127 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2471127 /var/tmp/bperf.sock 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2471127 ']' 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:49.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:49.573 05:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:49.573 [2024-10-28 05:10:40.149578] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:34:49.573 [2024-10-28 05:10:40.149704] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471127 ] 00:34:49.573 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:49.573 Zero copy mechanism will not be used. 00:34:49.830 [2024-10-28 05:10:40.283398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:49.830 [2024-10-28 05:10:40.321368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.830 [2024-10-28 05:10:40.369107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:50.763 05:10:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:50.763 05:10:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:50.763 05:10:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:50.763 05:10:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:50.763 05:10:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:51.021 05:10:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.021 05:10:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.587 nvme0n1 00:34:51.587 05:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:51.587 05:10:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:51.587 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:51.587 Zero copy mechanism will not be used. 00:34:51.587 Running I/O for 2 seconds... 
00:34:53.894 4019.00 IOPS, 502.38 MiB/s [2024-10-28T04:10:44.490Z] 4033.50 IOPS, 504.19 MiB/s 00:34:53.894 Latency(us) 00:34:53.894 [2024-10-28T04:10:44.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:53.894 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:53.894 nvme0n1 : 2.00 4036.85 504.61 0.00 0.00 3959.12 1192.23 10024.44 00:34:53.894 [2024-10-28T04:10:44.490Z] =================================================================================================================== 00:34:53.894 [2024-10-28T04:10:44.490Z] Total : 4036.85 504.61 0.00 0.00 3959.12 1192.23 10024.44 00:34:53.894 { 00:34:53.894 "results": [ 00:34:53.894 { 00:34:53.894 "job": "nvme0n1", 00:34:53.894 "core_mask": "0x2", 00:34:53.894 "workload": "randread", 00:34:53.894 "status": "finished", 00:34:53.894 "queue_depth": 16, 00:34:53.894 "io_size": 131072, 00:34:53.894 "runtime": 2.002303, 00:34:53.894 "iops": 4036.851565422416, 00:34:53.894 "mibps": 504.606445677802, 00:34:53.894 "io_failed": 0, 00:34:53.894 "io_timeout": 0, 00:34:53.894 "avg_latency_us": 3959.1182745031074, 00:34:53.894 "min_latency_us": 1192.2272136625209, 00:34:53.894 "max_latency_us": 10024.441061815482 00:34:53.894 } 00:34:53.894 ], 00:34:53.894 "core_count": 1 00:34:53.894 } 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:53.894 | select(.opcode=="crc32c") 00:34:53.894 | "\(.module_name) \(.executed)"' 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2471127 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2471127 ']' 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2471127 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2471127 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2471127' 00:34:53.894 killing process with pid 2471127 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2471127 00:34:53.894 Received shutdown signal, test time was about 2.000000 seconds 00:34:53.894 00:34:53.894 Latency(us) 00:34:53.894 [2024-10-28T04:10:44.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:53.894 [2024-10-28T04:10:44.490Z] =================================================================================================================== 00:34:53.894 [2024-10-28T04:10:44.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:53.894 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2471127 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2471652 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2471652 /var/tmp/bperf.sock 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2471652 ']' 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:54.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:54.153 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:54.153 [2024-10-28 05:10:44.697112] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
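After each of the two-second runs above, the harness checks that the digest work really ran in the expected accel module before killing bdevperf. A compact sketch of that check, matching the accel_get_stats/jq pattern visible in the trace (the variable names and the final echo are mine):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # pull crc32c statistics from the bdevperf accel framework
  read -r acc_module acc_executed < <(
      $SPDK/scripts/rpc.py -s $SOCK accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )

  # with DSA disabled the digests must have been computed in software,
  # and at least one crc32c operation must have executed
  (( acc_executed > 0 )) && [[ $acc_module == software ]] \
      && echo "digest check passed: $acc_module executed $acc_executed crc32c ops"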
00:34:54.153 [2024-10-28 05:10:44.697209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471652 ] 00:34:54.411 [2024-10-28 05:10:44.828871] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:54.411 [2024-10-28 05:10:44.864720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.411 [2024-10-28 05:10:44.910579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.411 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:54.411 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:54.411 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:54.411 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:54.411 05:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:54.978 05:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:54.978 05:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:55.236 nvme0n1 00:34:55.236 05:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:55.236 05:10:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:55.236 Running I/O for 2 seconds... 
00:34:57.542 19080.00 IOPS, 74.53 MiB/s [2024-10-28T04:10:48.138Z] 19221.50 IOPS, 75.08 MiB/s 00:34:57.542 Latency(us) 00:34:57.542 [2024-10-28T04:10:48.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.542 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:57.542 nvme0n1 : 2.01 19240.50 75.16 0.00 0.00 6641.36 3576.68 18199.71 00:34:57.542 [2024-10-28T04:10:48.138Z] =================================================================================================================== 00:34:57.542 [2024-10-28T04:10:48.138Z] Total : 19240.50 75.16 0.00 0.00 6641.36 3576.68 18199.71 00:34:57.542 { 00:34:57.542 "results": [ 00:34:57.542 { 00:34:57.542 "job": "nvme0n1", 00:34:57.542 "core_mask": "0x2", 00:34:57.542 "workload": "randwrite", 00:34:57.542 "status": "finished", 00:34:57.542 "queue_depth": 128, 00:34:57.542 "io_size": 4096, 00:34:57.542 "runtime": 2.00712, 00:34:57.542 "iops": 19240.50380644904, 00:34:57.542 "mibps": 75.15821799394156, 00:34:57.542 "io_failed": 0, 00:34:57.542 "io_timeout": 0, 00:34:57.542 "avg_latency_us": 6641.3559527527195, 00:34:57.542 "min_latency_us": 3576.681640987563, 00:34:57.542 "max_latency_us": 18199.71338407277 00:34:57.542 } 00:34:57.542 ], 00:34:57.542 "core_count": 1 00:34:57.542 } 00:34:57.542 05:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:57.542 05:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:57.542 05:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:57.542 05:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:57.542 | select(.opcode=="crc32c") 00:34:57.542 | "\(.module_name) \(.executed)"' 00:34:57.542 05:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:57.542 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:57.542 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:57.542 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2471652 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2471652 ']' 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2471652 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2471652 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2471652' 00:34:57.800 killing process with pid 2471652 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2471652 00:34:57.800 Received shutdown signal, test time was about 2.000000 seconds 00:34:57.800 00:34:57.800 Latency(us) 00:34:57.800 [2024-10-28T04:10:48.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.800 [2024-10-28T04:10:48.396Z] =================================================================================================================== 00:34:57.800 [2024-10-28T04:10:48.396Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2471652 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2472043 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2472043 /var/tmp/bperf.sock 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2472043 ']' 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:57.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:57.800 05:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:58.058 [2024-10-28 05:10:48.428491] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
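As a quick sanity check of the bandwidth column in these result tables, MiB/s is simply IOPS multiplied by the I/O size; for the randwrite 4096-byte run above the numbers line up (IOPS value copied from the log, the awk one-liner is only illustrative):

  # 19240.50 IOPS * 4096 B per I/O / 1048576 B per MiB  ->  ~75.16 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 19240.50 * 4096 / 1048576 }'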
00:34:58.058 [2024-10-28 05:10:48.428586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472043 ] 00:34:58.058 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:58.058 Zero copy mechanism will not be used. 00:34:58.058 [2024-10-28 05:10:48.559556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:58.058 [2024-10-28 05:10:48.600341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.058 [2024-10-28 05:10:48.648599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.989 05:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:58.989 05:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:58.989 05:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:58.989 05:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:58.989 05:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:59.247 05:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:59.247 05:10:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:59.828 nvme0n1 00:34:59.828 05:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:59.828 05:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:59.828 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:59.828 Zero copy mechanism will not be used. 00:34:59.828 Running I/O for 2 seconds... 
00:35:01.692 2886.00 IOPS, 360.75 MiB/s [2024-10-28T04:10:52.289Z] 2986.00 IOPS, 373.25 MiB/s 00:35:01.693 Latency(us) 00:35:01.693 [2024-10-28T04:10:52.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.693 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:01.693 nvme0n1 : 2.01 2985.62 373.20 0.00 0.00 5346.18 2822.42 8856.55 00:35:01.693 [2024-10-28T04:10:52.289Z] =================================================================================================================== 00:35:01.693 [2024-10-28T04:10:52.289Z] Total : 2985.62 373.20 0.00 0.00 5346.18 2822.42 8856.55 00:35:01.693 { 00:35:01.693 "results": [ 00:35:01.693 { 00:35:01.693 "job": "nvme0n1", 00:35:01.693 "core_mask": "0x2", 00:35:01.693 "workload": "randwrite", 00:35:01.693 "status": "finished", 00:35:01.693 "queue_depth": 16, 00:35:01.693 "io_size": 131072, 00:35:01.693 "runtime": 2.005614, 00:35:01.693 "iops": 2985.6193664384073, 00:35:01.693 "mibps": 373.2024208048009, 00:35:01.693 "io_failed": 0, 00:35:01.693 "io_timeout": 0, 00:35:01.693 "avg_latency_us": 5346.183888350924, 00:35:01.693 "min_latency_us": 2822.415444588825, 00:35:01.693 "max_latency_us": 8856.545015778727 00:35:01.693 } 00:35:01.693 ], 00:35:01.693 "core_count": 1 00:35:01.693 } 00:35:01.951 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:01.951 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:01.951 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:01.951 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:01.951 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:01.951 | select(.opcode=="crc32c") 00:35:01.951 | "\(.module_name) \(.executed)"' 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2472043 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2472043 ']' 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2472043 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2472043 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2472043' 00:35:02.211 killing process with pid 2472043 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2472043 00:35:02.211 Received shutdown signal, test time was about 2.000000 seconds 00:35:02.211 00:35:02.211 Latency(us) 00:35:02.211 [2024-10-28T04:10:52.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.211 [2024-10-28T04:10:52.807Z] =================================================================================================================== 00:35:02.211 [2024-10-28T04:10:52.807Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:02.211 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2472043 00:35:02.470 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2470459 00:35:02.471 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2470459 ']' 00:35:02.471 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2470459 00:35:02.471 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:02.471 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:02.471 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2470459 00:35:02.471 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:02.471 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:02.471 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2470459' 00:35:02.471 killing process with pid 2470459 00:35:02.471 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2470459 00:35:02.471 05:10:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2470459 00:35:02.729 00:35:02.729 real 0m18.673s 00:35:02.729 user 0m37.681s 00:35:02.729 sys 0m4.133s 00:35:02.729 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:02.729 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:02.729 ************************************ 00:35:02.729 END TEST nvmf_digest_clean 00:35:02.729 ************************************ 00:35:02.729 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:02.729 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:02.729 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:02.729 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:02.729 ************************************ 00:35:02.729 START TEST nvmf_digest_error 00:35:02.729 ************************************ 00:35:02.729 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:35:02.729 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:02.729 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:02.729 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:02.729 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:02.729 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=2472600 00:35:02.730 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:02.730 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 2472600 00:35:02.730 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2472600 ']' 00:35:02.730 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.730 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:02.730 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.730 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:02.730 05:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:02.730 [2024-10-28 05:10:53.192238] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:35:02.730 [2024-10-28 05:10:53.192316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.988 [2024-10-28 05:10:53.329805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:02.988 [2024-10-28 05:10:53.366885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.988 [2024-10-28 05:10:53.412807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:02.988 [2024-10-28 05:10:53.412859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.988 [2024-10-28 05:10:53.412874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:02.988 [2024-10-28 05:10:53.412886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:02.988 [2024-10-28 05:10:53.412896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
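The target in this test is launched with --wait-for-rpc, so nvmf_tgt comes up idle and subsystem initialization is deferred until an explicit RPC arrives; that is what allows the crc32c opcode to be rerouted to the error accel module (seen just below) before any digest work is done. A minimal sketch of that startup pattern, using the in-tree rpc.py client with the paths and netns name taken from this run, could look roughly like:

  # Sketch only: bring the target up idle, issue pre-init RPCs, then finish init.
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  ./scripts/rpc.py rpc_get_methods                     # returns once /var/tmp/spdk.sock answers; waitforlisten waits on the same socket
  ./scripts/rpc.py accel_assign_opc -o crc32c -m error # must land before framework init
  ./scripts/rpc.py framework_start_init                # only now do the accel/bdev/nvmf subsystems initialize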
00:35:02.988 [2024-10-28 05:10:53.413652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.924 [2024-10-28 05:10:54.206191] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.924 null0 00:35:03.924 [2024-10-28 05:10:54.324964] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:03.924 [2024-10-28 05:10:54.349162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2472746 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2472746 /var/tmp/bperf.sock 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2472746 ']' 
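Reading through the xtrace above, common_target_config reduces to a short target-side RPC sequence: a null backing bdev, a TCP transport, and a subsystem with a listener on 10.0.0.2:4420; run_bperf_err then starts bdevperf in paused mode (-z) with its own RPC socket. A rough equivalent of those steps, with the null-bdev size and subsystem serial chosen arbitrarily here rather than taken from the script, is:

  # Target side (default /var/tmp/spdk.sock); size and serial below are placeholders.
  rpc=./scripts/rpc.py
  $rpc bdev_null_create null0 100 4096
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf idles on /var/tmp/bperf.sock until perform_tests is sent.
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &

The -z flag matters because the NVMe controller attach and the digest/error setup in the next lines have to happen before any I/O is issued.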
00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:03.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:03.924 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.924 [2024-10-28 05:10:54.399154] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:35:03.924 [2024-10-28 05:10:54.399215] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472746 ] 00:35:04.183 [2024-10-28 05:10:54.530323] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:04.183 [2024-10-28 05:10:54.570935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.183 [2024-10-28 05:10:54.623240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.183 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:04.183 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:04.183 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:04.183 05:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:04.749 05:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:04.749 05:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.749 05:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:04.749 05:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.749 05:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.749 05:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.008 nvme0n1 00:35:05.008 05:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:05.008 05:10:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.008 05:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.008 05:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.008 05:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:05.008 05:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:05.267 Running I/O for 2 seconds... 00:35:05.267 [2024-10-28 05:10:55.665854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.665916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.665936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.267 [2024-10-28 05:10:55.684075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.684125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.684155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.267 [2024-10-28 05:10:55.699905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.699949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.699982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.267 [2024-10-28 05:10:55.713367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.713404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.713424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.267 [2024-10-28 05:10:55.728271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.728307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.728331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.267 [2024-10-28 05:10:55.741675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.741708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:20779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.741725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.267 [2024-10-28 05:10:55.754421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.754456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.754477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.267 [2024-10-28 05:10:55.769581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.769617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.769663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.267 [2024-10-28 05:10:55.785628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.785686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.785704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.267 [2024-10-28 05:10:55.802464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.802500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.802519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.267 [2024-10-28 05:10:55.818901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.818933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.818951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.267 [2024-10-28 05:10:55.834065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.834101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.834121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.267 [2024-10-28 05:10:55.846548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.267 [2024-10-28 05:10:55.846582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.267 [2024-10-28 05:10:55.846601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.526 [2024-10-28 05:10:55.861361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.526 [2024-10-28 05:10:55.861397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.526 [2024-10-28 05:10:55.861415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.526 [2024-10-28 05:10:55.873725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.526 [2024-10-28 05:10:55.873755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.526 [2024-10-28 05:10:55.873776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.526 [2024-10-28 05:10:55.888428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.526 [2024-10-28 05:10:55.888463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.526 [2024-10-28 05:10:55.888483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.526 [2024-10-28 05:10:55.905155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.526 [2024-10-28 05:10:55.905197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.526 [2024-10-28 05:10:55.905217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.526 [2024-10-28 05:10:55.916811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.526 [2024-10-28 05:10:55.916840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.526 [2024-10-28 05:10:55.916858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.526 [2024-10-28 05:10:55.932865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.526 [2024-10-28 05:10:55.932913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.526 [2024-10-28 05:10:55.932931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.526 [2024-10-28 05:10:55.945165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 
00:35:05.526 [2024-10-28 05:10:55.945199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.526 [2024-10-28 05:10:55.945225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.526 [2024-10-28 05:10:55.959804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.526 [2024-10-28 05:10:55.959833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.526 [2024-10-28 05:10:55.959853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.526 [2024-10-28 05:10:55.974082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.526 [2024-10-28 05:10:55.974118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.526 [2024-10-28 05:10:55.974143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.526 [2024-10-28 05:10:55.988232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.526 [2024-10-28 05:10:55.988267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.526 [2024-10-28 05:10:55.988285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.526 [2024-10-28 05:10:56.001397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.526 [2024-10-28 05:10:56.001433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.526 [2024-10-28 05:10:56.001451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.526 [2024-10-28 05:10:56.015025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.526 [2024-10-28 05:10:56.015061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.527 [2024-10-28 05:10:56.015081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.527 [2024-10-28 05:10:56.028706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.527 [2024-10-28 05:10:56.028736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.527 [2024-10-28 05:10:56.028753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.527 [2024-10-28 05:10:56.042364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.527 [2024-10-28 05:10:56.042399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.527 [2024-10-28 05:10:56.042427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.527 [2024-10-28 05:10:56.057471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.527 [2024-10-28 05:10:56.057507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.527 [2024-10-28 05:10:56.057526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.527 [2024-10-28 05:10:56.070655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.527 [2024-10-28 05:10:56.070700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.527 [2024-10-28 05:10:56.070719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.527 [2024-10-28 05:10:56.087356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.527 [2024-10-28 05:10:56.087391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.527 [2024-10-28 05:10:56.087410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.527 [2024-10-28 05:10:56.103077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.527 [2024-10-28 05:10:56.103112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.527 [2024-10-28 05:10:56.103132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.527 [2024-10-28 05:10:56.114862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.527 [2024-10-28 05:10:56.114890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.527 [2024-10-28 05:10:56.114907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.786 [2024-10-28 05:10:56.128947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.786 [2024-10-28 05:10:56.128982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.786 [2024-10-28 05:10:56.129002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.786 [2024-10-28 05:10:56.144698] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.786 [2024-10-28 05:10:56.144735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.786 [2024-10-28 05:10:56.144752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.786 [2024-10-28 05:10:56.156925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.786 [2024-10-28 05:10:56.156975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.786 [2024-10-28 05:10:56.157000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.786 [2024-10-28 05:10:56.172825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.786 [2024-10-28 05:10:56.172854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.786 [2024-10-28 05:10:56.172872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.786 [2024-10-28 05:10:56.184907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.786 [2024-10-28 05:10:56.184937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.786 [2024-10-28 05:10:56.184968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.786 [2024-10-28 05:10:56.199509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.786 [2024-10-28 05:10:56.199544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.786 [2024-10-28 05:10:56.199564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.786 [2024-10-28 05:10:56.215495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.786 [2024-10-28 05:10:56.215530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.786 [2024-10-28 05:10:56.215549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.786 [2024-10-28 05:10:56.227921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.786 [2024-10-28 05:10:56.227968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.786 [2024-10-28 05:10:56.227994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:05.786 [2024-10-28 05:10:56.245939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.786 [2024-10-28 05:10:56.245973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.786 [2024-10-28 05:10:56.245992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.786 [2024-10-28 05:10:56.260705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.786 [2024-10-28 05:10:56.260733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.786 [2024-10-28 05:10:56.260749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.786 [2024-10-28 05:10:56.273177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.786 [2024-10-28 05:10:56.273212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.787 [2024-10-28 05:10:56.273231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.787 [2024-10-28 05:10:56.289367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.787 [2024-10-28 05:10:56.289403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.787 [2024-10-28 05:10:56.289421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.787 [2024-10-28 05:10:56.305137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.787 [2024-10-28 05:10:56.305173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.787 [2024-10-28 05:10:56.305192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.787 [2024-10-28 05:10:56.317813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.787 [2024-10-28 05:10:56.317858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.787 [2024-10-28 05:10:56.317877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.787 [2024-10-28 05:10:56.332134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.787 [2024-10-28 05:10:56.332169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.787 [2024-10-28 05:10:56.332189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.787 [2024-10-28 05:10:56.346135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.787 [2024-10-28 05:10:56.346170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.787 [2024-10-28 05:10:56.346190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.787 [2024-10-28 05:10:56.358934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.787 [2024-10-28 05:10:56.358983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.787 [2024-10-28 05:10:56.359003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.787 [2024-10-28 05:10:56.374300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:05.787 [2024-10-28 05:10:56.374336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.787 [2024-10-28 05:10:56.374354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.045 [2024-10-28 05:10:56.389501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.045 [2024-10-28 05:10:56.389537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.045 [2024-10-28 05:10:56.389562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.045 [2024-10-28 05:10:56.405489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.045 [2024-10-28 05:10:56.405524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.045 [2024-10-28 05:10:56.405543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.045 [2024-10-28 05:10:56.417395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.417430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.417449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.432078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.432114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.432134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.447295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.447331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.447350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.460945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.460980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.460999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.472323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.472358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.472378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.488848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.488877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.488893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.507368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.507404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.507424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.525206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.525248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.525268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.542361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.542397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.542417] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.554425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.554460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.554479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.569260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.569296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.569316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.585379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.585414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.585434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.600135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.600171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.600190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.616178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.616214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.616233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.046 [2024-10-28 05:10:56.627564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.046 [2024-10-28 05:10:56.627608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.046 [2024-10-28 05:10:56.627626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.305 17272.00 IOPS, 67.47 MiB/s [2024-10-28T04:10:56.901Z] [2024-10-28 05:10:56.646665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.305 [2024-10-28 05:10:56.646715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:8568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.305 [2024-10-28 05:10:56.646733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.305 [2024-10-28 05:10:56.662901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.305 [2024-10-28 05:10:56.662934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.305 [2024-10-28 05:10:56.662969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.305 [2024-10-28 05:10:56.675669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.305 [2024-10-28 05:10:56.675720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.305 [2024-10-28 05:10:56.675736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.305 [2024-10-28 05:10:56.689884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.305 [2024-10-28 05:10:56.689916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.305 [2024-10-28 05:10:56.689949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.305 [2024-10-28 05:10:56.704747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.305 [2024-10-28 05:10:56.704794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.305 [2024-10-28 05:10:56.704811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.305 [2024-10-28 05:10:56.718919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.305 [2024-10-28 05:10:56.718964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.305 [2024-10-28 05:10:56.718981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.306 [2024-10-28 05:10:56.731332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.306 [2024-10-28 05:10:56.731367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.306 [2024-10-28 05:10:56.731386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.306 [2024-10-28 05:10:56.744841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.306 [2024-10-28 05:10:56.744874] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.306 [2024-10-28 05:10:56.744891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.306 [2024-10-28 05:10:56.759445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.306 [2024-10-28 05:10:56.759479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.306 [2024-10-28 05:10:56.759498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.306 [2024-10-28 05:10:56.774978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.306 [2024-10-28 05:10:56.775015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.306 [2024-10-28 05:10:56.775040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.306 [2024-10-28 05:10:56.787869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.306 [2024-10-28 05:10:56.787900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.306 [2024-10-28 05:10:56.787915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.306 [2024-10-28 05:10:56.804623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.306 [2024-10-28 05:10:56.804670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.306 [2024-10-28 05:10:56.804705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.306 [2024-10-28 05:10:56.821392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.306 [2024-10-28 05:10:56.821428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.306 [2024-10-28 05:10:56.821448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.306 [2024-10-28 05:10:56.837087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.306 [2024-10-28 05:10:56.837125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.306 [2024-10-28 05:10:56.837145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.306 [2024-10-28 05:10:56.849831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.306 [2024-10-28 05:10:56.849860] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.306 [2024-10-28 05:10:56.849876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.306 [2024-10-28 05:10:56.863740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.306 [2024-10-28 05:10:56.863772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.306 [2024-10-28 05:10:56.863789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.306 [2024-10-28 05:10:56.881693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.306 [2024-10-28 05:10:56.881725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.306 [2024-10-28 05:10:56.881741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.306 [2024-10-28 05:10:56.892806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.306 [2024-10-28 05:10:56.892835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.306 [2024-10-28 05:10:56.892850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.564 [2024-10-28 05:10:56.910909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.564 [2024-10-28 05:10:56.910957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.564 [2024-10-28 05:10:56.910977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.564 [2024-10-28 05:10:56.922816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.564 [2024-10-28 05:10:56.922845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.564 [2024-10-28 05:10:56.922861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.564 [2024-10-28 05:10:56.939525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.564 [2024-10-28 05:10:56.939562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.564 [2024-10-28 05:10:56.939581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.564 [2024-10-28 05:10:56.956641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xfcebc0) 00:35:06.564 [2024-10-28 05:10:56.956691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.564 [2024-10-28 05:10:56.956708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.564 [2024-10-28 05:10:56.968002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.564 [2024-10-28 05:10:56.968036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.564 [2024-10-28 05:10:56.968056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.564 [2024-10-28 05:10:56.984402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.564 [2024-10-28 05:10:56.984438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.564 [2024-10-28 05:10:56.984457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.564 [2024-10-28 05:10:57.000820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.564 [2024-10-28 05:10:57.000849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.564 [2024-10-28 05:10:57.000866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.564 [2024-10-28 05:10:57.014748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.564 [2024-10-28 05:10:57.014780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.564 [2024-10-28 05:10:57.014798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.564 [2024-10-28 05:10:57.027127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.564 [2024-10-28 05:10:57.027163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.564 [2024-10-28 05:10:57.027190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.564 [2024-10-28 05:10:57.042270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.565 [2024-10-28 05:10:57.042306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.565 [2024-10-28 05:10:57.042325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.565 [2024-10-28 05:10:57.058352] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.565 [2024-10-28 05:10:57.058387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.565 [2024-10-28 05:10:57.058407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.565 [2024-10-28 05:10:57.069684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.565 [2024-10-28 05:10:57.069732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.565 [2024-10-28 05:10:57.069749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.565 [2024-10-28 05:10:57.087444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.565 [2024-10-28 05:10:57.087480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.565 [2024-10-28 05:10:57.087499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.565 [2024-10-28 05:10:57.101149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.565 [2024-10-28 05:10:57.101184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.565 [2024-10-28 05:10:57.101204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.565 [2024-10-28 05:10:57.115796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.565 [2024-10-28 05:10:57.115829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.565 [2024-10-28 05:10:57.115846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.565 [2024-10-28 05:10:57.127795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.565 [2024-10-28 05:10:57.127826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.565 [2024-10-28 05:10:57.127858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.565 [2024-10-28 05:10:57.142493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.565 [2024-10-28 05:10:57.142529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.565 [2024-10-28 05:10:57.142547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:06.565 [2024-10-28 05:10:57.155816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.565 [2024-10-28 05:10:57.155853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.565 [2024-10-28 05:10:57.155871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.170565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.170601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.823 [2024-10-28 05:10:57.170619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.187499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.187535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.823 [2024-10-28 05:10:57.187554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.199348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.199383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.823 [2024-10-28 05:10:57.199403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.214264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.214300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.823 [2024-10-28 05:10:57.214319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.229040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.229076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.823 [2024-10-28 05:10:57.229095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.243153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.243188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.823 [2024-10-28 05:10:57.243207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.256236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.256272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.823 [2024-10-28 05:10:57.256291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.269998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.270033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.823 [2024-10-28 05:10:57.270052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.285336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.285371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.823 [2024-10-28 05:10:57.285390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.298652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.298687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.823 [2024-10-28 05:10:57.298720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.312544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.312579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.823 [2024-10-28 05:10:57.312598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.328963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.329014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.823 [2024-10-28 05:10:57.329033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.823 [2024-10-28 05:10:57.342510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.823 [2024-10-28 05:10:57.342546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.824 [2024-10-28 05:10:57.342565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.824 [2024-10-28 05:10:57.358886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.824 [2024-10-28 05:10:57.358916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.824 [2024-10-28 05:10:57.358946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.824 [2024-10-28 05:10:57.373305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.824 [2024-10-28 05:10:57.373341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.824 [2024-10-28 05:10:57.373360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.824 [2024-10-28 05:10:57.390894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.824 [2024-10-28 05:10:57.390931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.824 [2024-10-28 05:10:57.390948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.824 [2024-10-28 05:10:57.403484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:06.824 [2024-10-28 05:10:57.403520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.824 [2024-10-28 05:10:57.403546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.419289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.419326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.419344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.434213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.434247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.434265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.446323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.446354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.446370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.462870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.462901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.462931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.477353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.477385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.477401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.488901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.488949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.488966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.504107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.504139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.504171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.517157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.517205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.517222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.531136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.531175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.531194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.543765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.543797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 
[2024-10-28 05:10:57.543815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.554798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.554830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.554847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.571292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.571328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.571347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.585748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.585797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.585816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.596375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.596408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.596425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.612428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.612462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.612480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.627558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.627590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.627608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 [2024-10-28 05:10:57.639419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfcebc0) 00:35:07.083 [2024-10-28 05:10:57.639453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10065 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.083 [2024-10-28 05:10:57.639472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.083 17479.00 IOPS, 68.28 MiB/s 00:35:07.083 Latency(us) 00:35:07.083 [2024-10-28T04:10:57.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.083 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:07.083 nvme0n1 : 2.00 17503.58 68.37 0.00 0.00 7304.64 3698.34 23844.54 00:35:07.083 [2024-10-28T04:10:57.679Z] =================================================================================================================== 00:35:07.083 [2024-10-28T04:10:57.679Z] Total : 17503.58 68.37 0.00 0.00 7304.64 3698.34 23844.54 00:35:07.083 { 00:35:07.083 "results": [ 00:35:07.083 { 00:35:07.083 "job": "nvme0n1", 00:35:07.083 "core_mask": "0x2", 00:35:07.083 "workload": "randread", 00:35:07.083 "status": "finished", 00:35:07.083 "queue_depth": 128, 00:35:07.083 "io_size": 4096, 00:35:07.083 "runtime": 2.004504, 00:35:07.083 "iops": 17503.58193348579, 00:35:07.083 "mibps": 68.37336692767887, 00:35:07.083 "io_failed": 0, 00:35:07.083 "io_timeout": 0, 00:35:07.083 "avg_latency_us": 7304.6404408458675, 00:35:07.083 "min_latency_us": 3698.3374791163915, 00:35:07.083 "max_latency_us": 23844.544273250416 00:35:07.083 } 00:35:07.083 ], 00:35:07.083 "core_count": 1 00:35:07.083 } 00:35:07.083 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:07.083 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:07.083 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:07.083 | .driver_specific 00:35:07.083 | .nvme_error 00:35:07.083 | .status_code 00:35:07.083 | .command_transient_transport_error' 00:35:07.083 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:07.342 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 )) 00:35:07.342 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2472746 00:35:07.342 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2472746 ']' 00:35:07.342 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2472746 00:35:07.342 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:07.601 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:07.601 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2472746 00:35:07.601 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:07.601 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:07.601 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2472746' 00:35:07.601 killing process with pid 2472746 00:35:07.601 05:10:57 
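The trace above is where host/digest.sh turns the raw bdev statistics into a pass/fail decision: get_transient_errcount issues bdev_get_iostat for nvme0n1 over the bperf.sock RPC socket and pulls out the command_transient_transport_error counter with jq, and the case passes only because that counter (137 here) is greater than zero. A minimal sketch of that check, assuming only the rpc.py path, socket name, bdev name, and jq filter already visible in the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock
    # Read per-bdev NVMe error statistics from the running bdevperf app; the nvme_error
    # block is populated because the script enables bdev_nvme_set_options --nvme-error-stat
    # before attaching the controller.
    errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The digest case passes only if at least one TRANSIENT TRANSPORT ERROR was counted.
    (( errcount > 0 ))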
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2472746 00:35:07.601 Received shutdown signal, test time was about 2.000000 seconds 00:35:07.601 00:35:07.601 Latency(us) 00:35:07.601 [2024-10-28T04:10:58.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.601 [2024-10-28T04:10:58.197Z] =================================================================================================================== 00:35:07.601 [2024-10-28T04:10:58.197Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.601 05:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2472746 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2473263 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2473263 /var/tmp/bperf.sock 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2473263 ']' 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:07.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:07.601 05:10:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:07.860 [2024-10-28 05:10:58.213404] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:35:07.860 [2024-10-28 05:10:58.213500] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473263 ] 00:35:07.860 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:07.860 Zero copy mechanism will not be used. 00:35:07.860 [2024-10-28 05:10:58.345108] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
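Here the script moves on to the next error case, run_bperf_err randread 131072 16: a fresh bdevperf instance (pid 2473263) is started with a 128 KiB I/O size and queue depth 16 on core mask 0x2, and the script waits for its /var/tmp/bperf.sock RPC socket before configuring it. A rough sketch of that launch, assuming the binary path and flags shown in the trace; waitforlisten is an SPDK test helper from autotest_common.sh, and the socket poll below is only a crude stand-in for it:

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    SOCK=/var/tmp/bperf.sock
    # -z keeps bdevperf idle until perform_tests is issued over the RPC socket.
    "$BDEVPERF" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Stand-in for waitforlisten: wait until the UNIX-domain RPC socket is available.
    while [ ! -S "$SOCK" ]; do sleep 0.1; done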
00:35:07.860 [2024-10-28 05:10:58.380858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.860 [2024-10-28 05:10:58.427026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.797 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:08.797 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:08.797 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:08.797 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:09.055 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:09.055 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.055 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:09.055 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.055 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.055 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.620 nvme0n1 00:35:09.620 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:09.620 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.620 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:09.620 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.620 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:09.620 05:10:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.620 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:09.620 Zero copy mechanism will not be used. 00:35:09.620 Running I/O for 2 seconds... 
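The trace above is the complete setup for this error case: NVMe error statistics are enabled on the bdevperf side, any leftover crc32c injection is cleared, the controller is attached with data digest checking turned on (--ddgst), crc32c corruption is injected at an interval of 32 operations, and perform_tests starts the 2-second run whose digest errors fill the log that follows. A sketch of the same RPC sequence, using only the paths, sockets, and arguments visible in the trace; note that accel_error_inject_error is issued via rpc_cmd rather than bperf_rpc, so it is shown here without -s /var/tmp/bperf.sock (rpc_cmd normally targets the application's default RPC socket):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
    SOCK=/var/tmp/bperf.sock

    # Count transient transport errors per bdev and retry indefinitely instead of failing I/O.
    "$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any crc32c error injection left over from the previous case.
    "$RPC" accel_error_inject_error -o crc32c -t disable
    # Attach the NVMe-oF TCP controller with data digest enabled so every READ is verified.
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt the crc32c result at an interval of 32 operations to force digest mismatches.
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Start the queued randread workload on the idle (-z) bdevperf instance.
    "$BPERF_PY" -s "$SOCK" perform_tests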
00:35:09.620 [2024-10-28 05:11:00.127906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.620 [2024-10-28 05:11:00.127983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.620 [2024-10-28 05:11:00.128007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.620 [2024-10-28 05:11:00.138039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.620 [2024-10-28 05:11:00.138077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.620 [2024-10-28 05:11:00.138097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.620 [2024-10-28 05:11:00.148025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.620 [2024-10-28 05:11:00.148063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.620 [2024-10-28 05:11:00.148089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.620 [2024-10-28 05:11:00.157923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.620 [2024-10-28 05:11:00.157963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.620 [2024-10-28 05:11:00.157998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.620 [2024-10-28 05:11:00.167722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.620 [2024-10-28 05:11:00.167768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.620 [2024-10-28 05:11:00.167793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.620 [2024-10-28 05:11:00.177598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.620 [2024-10-28 05:11:00.177648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.620 [2024-10-28 05:11:00.177685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.620 [2024-10-28 05:11:00.187386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.620 [2024-10-28 05:11:00.187422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.620 [2024-10-28 05:11:00.187446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.620 [2024-10-28 05:11:00.195660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.620 [2024-10-28 05:11:00.195708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.620 [2024-10-28 05:11:00.195725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.620 [2024-10-28 05:11:00.204824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.620 [2024-10-28 05:11:00.204872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.620 [2024-10-28 05:11:00.204890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.620 [2024-10-28 05:11:00.212905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.620 [2024-10-28 05:11:00.212956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.620 [2024-10-28 05:11:00.212976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.879 [2024-10-28 05:11:00.220855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.879 [2024-10-28 05:11:00.220886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.879 [2024-10-28 05:11:00.220903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.879 [2024-10-28 05:11:00.228855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.879 [2024-10-28 05:11:00.228884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.879 [2024-10-28 05:11:00.228902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.879 [2024-10-28 05:11:00.236744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.879 [2024-10-28 05:11:00.236776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.879 [2024-10-28 05:11:00.236793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.879 [2024-10-28 05:11:00.244785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.879 [2024-10-28 05:11:00.244816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.879 [2024-10-28 05:11:00.244833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.879 [2024-10-28 05:11:00.252569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.879 [2024-10-28 05:11:00.252603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.879 [2024-10-28 05:11:00.252622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.879 [2024-10-28 05:11:00.260438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.879 [2024-10-28 05:11:00.260471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.879 [2024-10-28 05:11:00.260500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.879 [2024-10-28 05:11:00.268442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.879 [2024-10-28 05:11:00.268476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.879 [2024-10-28 05:11:00.268495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.879 [2024-10-28 05:11:00.276236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.879 [2024-10-28 05:11:00.276271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.879 [2024-10-28 05:11:00.276297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.879 [2024-10-28 05:11:00.284363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.879 [2024-10-28 05:11:00.284398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.879 [2024-10-28 05:11:00.284416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.879 [2024-10-28 05:11:00.291894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.879 [2024-10-28 05:11:00.291938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.879 [2024-10-28 05:11:00.291955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.879 [2024-10-28 05:11:00.299589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.879 [2024-10-28 05:11:00.299631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:09.880 [2024-10-28 05:11:00.299663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.307512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.307546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.307564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.315487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.315520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.315538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.323499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.323532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.323554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.331525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.331565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.331585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.339313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.339347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.339366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.347153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.347186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.347213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.355121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.355156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.355175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.362998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.363033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.363052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.370808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.370838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.370855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.378700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.378728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.378749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.386723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.386758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.386778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.394504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.394538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.394561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.402402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.402435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.402454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.410181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.410214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.410235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.418175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.418208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.418234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.425989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.426022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.426041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.433703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.433733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.433752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.441730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.441758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.441776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.449447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.449480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.449505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.457203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.457237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.457257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.465119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 
00:35:09.880 [2024-10-28 05:11:00.465153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.880 [2024-10-28 05:11:00.465181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.880 [2024-10-28 05:11:00.472919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:09.880 [2024-10-28 05:11:00.472949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.881 [2024-10-28 05:11:00.472966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.139 [2024-10-28 05:11:00.480836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.139 [2024-10-28 05:11:00.480865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.139 [2024-10-28 05:11:00.480884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.139 [2024-10-28 05:11:00.488534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.139 [2024-10-28 05:11:00.488567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.139 [2024-10-28 05:11:00.488585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.139 [2024-10-28 05:11:00.496197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.139 [2024-10-28 05:11:00.496229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.139 [2024-10-28 05:11:00.496248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.139 [2024-10-28 05:11:00.503964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.139 [2024-10-28 05:11:00.504011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.504036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.511982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.512016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.512039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.519778] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.519808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.519824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.527464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.527497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.527515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.535276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.535310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.535334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.543093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.543126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.543146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.550887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.550916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.550932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.558792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.558821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.558838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.566574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.566608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.566628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:35:10.140 [2024-10-28 05:11:00.574246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.574279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.574298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.581968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.582014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.582035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.589616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.589657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.589698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.597348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.597381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.597411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.605040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.605074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.605092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.612795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.612822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.612841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.620787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.620816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.620834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.628656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.628702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.628723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.636454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.636487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.636506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.644125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.644158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.644178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.651905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.651948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.651968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.659718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.659746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.659765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.667429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.667468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.667494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.675193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.675226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.675245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.140 [2024-10-28 05:11:00.683019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.140 [2024-10-28 05:11:00.683052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.140 [2024-10-28 05:11:00.683078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.141 [2024-10-28 05:11:00.690735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.141 [2024-10-28 05:11:00.690764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.141 [2024-10-28 05:11:00.690791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.141 [2024-10-28 05:11:00.698498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.141 [2024-10-28 05:11:00.698532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.141 [2024-10-28 05:11:00.698552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.141 [2024-10-28 05:11:00.706284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.141 [2024-10-28 05:11:00.706317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.141 [2024-10-28 05:11:00.706336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.141 [2024-10-28 05:11:00.714101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.141 [2024-10-28 05:11:00.714133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.141 [2024-10-28 05:11:00.714153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.141 [2024-10-28 05:11:00.722026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.141 [2024-10-28 05:11:00.722059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.141 [2024-10-28 05:11:00.722079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.141 [2024-10-28 05:11:00.729801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.141 [2024-10-28 05:11:00.729831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:10.141 [2024-10-28 05:11:00.729848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.399 [2024-10-28 05:11:00.737584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.737617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.737653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.745298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.745331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.745349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.753058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.753090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.753111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.760795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.760823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.760844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.768545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.768578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.768596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.776265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.776298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.776316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.784053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.784086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.784105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.791853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.791880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.791901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.799683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.799711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.799740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.807329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.807363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.807381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.815127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.815160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.815180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.822844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.822874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.822895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.830656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.830700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.830718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.839030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.839065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.839085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.848930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.848987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.849007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.859176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.859212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.859231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.869197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.869233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.869252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.878351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.878387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.878407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.888322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.888358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.888377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.897317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.897352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.897372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.907247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 
00:35:10.400 [2024-10-28 05:11:00.907283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.907302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.917180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.917217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.917236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.927193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.400 [2024-10-28 05:11:00.927229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.400 [2024-10-28 05:11:00.927249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.400 [2024-10-28 05:11:00.936516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.401 [2024-10-28 05:11:00.936552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.401 [2024-10-28 05:11:00.936572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.401 [2024-10-28 05:11:00.945851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.401 [2024-10-28 05:11:00.945884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.401 [2024-10-28 05:11:00.945902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.401 [2024-10-28 05:11:00.955400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.401 [2024-10-28 05:11:00.955436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.401 [2024-10-28 05:11:00.955461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.401 [2024-10-28 05:11:00.965557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.401 [2024-10-28 05:11:00.965594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.401 [2024-10-28 05:11:00.965614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.401 [2024-10-28 05:11:00.974513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x8cc240) 00:35:10.401 [2024-10-28 05:11:00.974549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.401 [2024-10-28 05:11:00.974569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.401 [2024-10-28 05:11:00.984302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.401 [2024-10-28 05:11:00.984338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.401 [2024-10-28 05:11:00.984358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.401 [2024-10-28 05:11:00.993447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.401 [2024-10-28 05:11:00.993484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.401 [2024-10-28 05:11:00.993503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.659 [2024-10-28 05:11:01.002497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.659 [2024-10-28 05:11:01.002533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.659 [2024-10-28 05:11:01.002552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.659 [2024-10-28 05:11:01.011157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.659 [2024-10-28 05:11:01.011194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.659 [2024-10-28 05:11:01.011213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.018730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.018778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.018796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.026888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.026920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.026938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.036643] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.036685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.036719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.045883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.045916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.045933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.055320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.055356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.055376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.064330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.064367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.064387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.069468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.069503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.069522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.077864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.077912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.077929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.086491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.086527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.086547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:35:10.660 [2024-10-28 05:11:01.094369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.094404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.094424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.102242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.102277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.102296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.110045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.110080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.110099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.660 3720.00 IOPS, 465.00 MiB/s [2024-10-28T04:11:01.256Z] [2024-10-28 05:11:01.118902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.118947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.118964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.126909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.126938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.126970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.134911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.134953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.134974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.142792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.142822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.142838] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.150795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.150825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.150841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.158630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.158687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.158704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.166526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.166560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.166579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.174314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.174349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.174375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.182052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.182086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.660 [2024-10-28 05:11:01.182105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.660 [2024-10-28 05:11:01.189902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.660 [2024-10-28 05:11:01.189946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.661 [2024-10-28 05:11:01.189966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.661 [2024-10-28 05:11:01.197772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.661 [2024-10-28 05:11:01.197803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.661 [2024-10-28 05:11:01.197820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.661 [2024-10-28 05:11:01.205605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.661 [2024-10-28 05:11:01.205647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.661 [2024-10-28 05:11:01.205683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.661 [2024-10-28 05:11:01.213908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.661 [2024-10-28 05:11:01.213954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.661 [2024-10-28 05:11:01.213970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.661 [2024-10-28 05:11:01.221850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.661 [2024-10-28 05:11:01.221895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.661 [2024-10-28 05:11:01.221912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.661 [2024-10-28 05:11:01.229762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.661 [2024-10-28 05:11:01.229790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.661 [2024-10-28 05:11:01.229806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.661 [2024-10-28 05:11:01.237598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.661 [2024-10-28 05:11:01.237632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.661 [2024-10-28 05:11:01.237661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.661 [2024-10-28 05:11:01.245533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.661 [2024-10-28 05:11:01.245576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.661 [2024-10-28 05:11:01.245596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.661 [2024-10-28 05:11:01.253391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.920 [2024-10-28 05:11:01.253427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.920 
[2024-10-28 05:11:01.253446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.920 [2024-10-28 05:11:01.261132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.920 [2024-10-28 05:11:01.261167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.920 [2024-10-28 05:11:01.261186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.920 [2024-10-28 05:11:01.269041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.920 [2024-10-28 05:11:01.269076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.920 [2024-10-28 05:11:01.269096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.920 [2024-10-28 05:11:01.276977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.920 [2024-10-28 05:11:01.277011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.920 [2024-10-28 05:11:01.277030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.920 [2024-10-28 05:11:01.284767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.920 [2024-10-28 05:11:01.284797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.920 [2024-10-28 05:11:01.284813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.920 [2024-10-28 05:11:01.292443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.920 [2024-10-28 05:11:01.292477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.292495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.300375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.300408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.300426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.308107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.308141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.308159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.316191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.316226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.316245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.324058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.324095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.324114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.331868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.331909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.331925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.339928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.339958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.339991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.347751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.347782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.347799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.355438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.355472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.355490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.363219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.363254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.363273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.371017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.371047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.371064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.378826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.378855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.378878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.386654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.386708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.386724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.395656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.395707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.395724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.405357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.405394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.405414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.415347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.415384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.415404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.425284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.425320] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.425339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.435472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.435509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.435528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.446090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.446127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.446146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.455526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.455563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.455583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.465987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.466030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.466050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.476280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.476317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.921 [2024-10-28 05:11:01.476337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.921 [2024-10-28 05:11:01.486790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.921 [2024-10-28 05:11:01.486821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.922 [2024-10-28 05:11:01.486837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.922 [2024-10-28 05:11:01.497429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.922 [2024-10-28 05:11:01.497467] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.922 [2024-10-28 05:11:01.497486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.922 [2024-10-28 05:11:01.506542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:10.922 [2024-10-28 05:11:01.506579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.922 [2024-10-28 05:11:01.506598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.181 [2024-10-28 05:11:01.515407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.181 [2024-10-28 05:11:01.515445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.181 [2024-10-28 05:11:01.515465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.181 [2024-10-28 05:11:01.523890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.181 [2024-10-28 05:11:01.523922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.181 [2024-10-28 05:11:01.523940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.181 [2024-10-28 05:11:01.533625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.181 [2024-10-28 05:11:01.533684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.181 [2024-10-28 05:11:01.533703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.181 [2024-10-28 05:11:01.543157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.181 [2024-10-28 05:11:01.543193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.181 [2024-10-28 05:11:01.543213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.551766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.551799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.551816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.561021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 
00:35:11.182 [2024-10-28 05:11:01.561058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.561078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.571074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.571110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.571130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.579715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.579747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.579764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.589035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.589071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.589090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.598351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.598387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.598406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.608430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.608465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.608484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.618354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.618390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.618409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.628212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.628249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.628275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.637159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.637195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.637215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.646406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.646441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.646460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.656263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.656300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.656319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.664627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.664686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.664704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.673751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.673785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.673803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.682241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.682278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.682297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.691653] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.691703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.691721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.700981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.701017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.701036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.709616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.709675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.709693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.717451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.717486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.717506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.725318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.725352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.725371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.733211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.733246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.182 [2024-10-28 05:11:01.733265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.182 [2024-10-28 05:11:01.741011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.182 [2024-10-28 05:11:01.741045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.183 [2024-10-28 05:11:01.741064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
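Each failure above follows the same pattern: the host TCP transport (nvme_tcp_accel_seq_recv_compute_crc32_done) reports a data digest (CRC32C) mismatch on the qpair, and the paired nvme_qpair notices print the affected READ command together with its completion, reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22). As a rough post-hoc cross-check on a saved console log, the completions can be tallied with grep; the log file name below is hypothetical and this is not part of digest.sh.

  # Hypothetical sanity check, not part of the test: count completions that were
  # flagged as transient transport errors in a captured console log.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-console.log
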
00:35:11.183 [2024-10-28 05:11:01.748780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.183 [2024-10-28 05:11:01.748811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.183 [2024-10-28 05:11:01.748828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.183 [2024-10-28 05:11:01.756662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.183 [2024-10-28 05:11:01.756725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.183 [2024-10-28 05:11:01.756742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.183 [2024-10-28 05:11:01.764542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.183 [2024-10-28 05:11:01.764577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.183 [2024-10-28 05:11:01.764595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.183 [2024-10-28 05:11:01.772290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.183 [2024-10-28 05:11:01.772325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.183 [2024-10-28 05:11:01.772350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.780167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.780202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.780222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.787897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.787929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.787945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.795794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.795825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.795843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.803587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.803621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.803649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.811658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.811704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.811722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.819558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.819592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.819611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.827560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.827595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.827614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.835552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.835586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.835605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.843681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.843733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.843751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.851503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.851537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.851556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.859236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.859281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.859300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.867010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.867044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.867063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.874841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.874872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.874888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.882738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.882769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.882786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.890735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.890767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.890784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.898624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.898681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.898699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.906400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.906434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.906453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.914203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.914237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.442 [2024-10-28 05:11:01.914256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.442 [2024-10-28 05:11:01.922033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.442 [2024-10-28 05:11:01.922067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:01.922086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:01.929860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:01.929891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:01.929908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:01.937924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:01.937972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:01.937991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:01.945755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:01.945786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:01.945803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:01.953526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:01.953559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:01.953577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:01.961260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:01.961295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 
[2024-10-28 05:11:01.961313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:01.969038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:01.969073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:01.969091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:01.976811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:01.976842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:01.976865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:01.984630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:01.984685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:01.984703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:01.992413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:01.992447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:01.992466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:02.000247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:02.000281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:02.000299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:02.008028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:02.008061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:02.008079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:02.015954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:02.015989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:02.016008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:02.023757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:02.023788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:02.023805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.443 [2024-10-28 05:11:02.031528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.443 [2024-10-28 05:11:02.031562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.443 [2024-10-28 05:11:02.031581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.702 [2024-10-28 05:11:02.039326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.702 [2024-10-28 05:11:02.039360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.702 [2024-10-28 05:11:02.039378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.702 [2024-10-28 05:11:02.047062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.702 [2024-10-28 05:11:02.047102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.702 [2024-10-28 05:11:02.047122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.702 [2024-10-28 05:11:02.054952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.702 [2024-10-28 05:11:02.054986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.702 [2024-10-28 05:11:02.055004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.702 [2024-10-28 05:11:02.062727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.702 [2024-10-28 05:11:02.062758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.702 [2024-10-28 05:11:02.062775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.702 [2024-10-28 05:11:02.070504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.702 [2024-10-28 05:11:02.070537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.702 [2024-10-28 05:11:02.070556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.702 [2024-10-28 05:11:02.078268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.702 [2024-10-28 05:11:02.078301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.702 [2024-10-28 05:11:02.078320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.702 [2024-10-28 05:11:02.086037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.702 [2024-10-28 05:11:02.086071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.702 [2024-10-28 05:11:02.086090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:11.702 [2024-10-28 05:11:02.093734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.702 [2024-10-28 05:11:02.093764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.702 [2024-10-28 05:11:02.093781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:11.702 [2024-10-28 05:11:02.101500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.702 [2024-10-28 05:11:02.101532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.702 [2024-10-28 05:11:02.101551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:11.702 [2024-10-28 05:11:02.109883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cc240) 00:35:11.702 [2024-10-28 05:11:02.109915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.702 [2024-10-28 05:11:02.109932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.702 3726.00 IOPS, 465.75 MiB/s 00:35:11.702 Latency(us) 00:35:11.702 [2024-10-28T04:11:02.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.702 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:11.702 nvme0n1 : 2.00 3725.40 465.68 0.00 0.00 4289.93 1350.38 12749.53 00:35:11.702 [2024-10-28T04:11:02.298Z] =================================================================================================================== 00:35:11.702 [2024-10-28T04:11:02.298Z] Total : 3725.40 465.68 0.00 0.00 4289.93 1350.38 12749.53 00:35:11.702 { 00:35:11.702 "results": [ 00:35:11.702 { 00:35:11.702 "job": "nvme0n1", 00:35:11.702 "core_mask": "0x2", 00:35:11.702 "workload": 
"randread", 00:35:11.702 "status": "finished", 00:35:11.702 "queue_depth": 16, 00:35:11.702 "io_size": 131072, 00:35:11.702 "runtime": 2.004615, 00:35:11.702 "iops": 3725.403631121188, 00:35:11.702 "mibps": 465.6754538901485, 00:35:11.702 "io_failed": 0, 00:35:11.702 "io_timeout": 0, 00:35:11.702 "avg_latency_us": 4289.926052413073, 00:35:11.702 "min_latency_us": 1350.379803229998, 00:35:11.702 "max_latency_us": 12749.531835901244 00:35:11.702 } 00:35:11.702 ], 00:35:11.702 "core_count": 1 00:35:11.702 } 00:35:11.702 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:11.702 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:11.702 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:11.702 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:11.702 | .driver_specific 00:35:11.702 | .nvme_error 00:35:11.702 | .status_code 00:35:11.702 | .command_transient_transport_error' 00:35:11.961 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 240 > 0 )) 00:35:11.961 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2473263 00:35:11.961 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2473263 ']' 00:35:11.961 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2473263 00:35:11.961 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:11.961 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:11.961 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2473263 00:35:11.961 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:11.961 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:11.961 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2473263' 00:35:11.961 killing process with pid 2473263 00:35:11.961 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2473263 00:35:11.961 Received shutdown signal, test time was about 2.000000 seconds 00:35:11.961 00:35:11.961 Latency(us) 00:35:11.961 [2024-10-28T04:11:02.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.961 [2024-10-28T04:11:02.557Z] =================================================================================================================== 00:35:11.961 [2024-10-28T04:11:02.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.961 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2473263 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:12.220 05:11:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2473782 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2473782 /var/tmp/bperf.sock 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2473782 ']' 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:12.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:12.220 05:11:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:12.220 [2024-10-28 05:11:02.681927] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:35:12.220 [2024-10-28 05:11:02.682028] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473782 ] 00:35:12.478 [2024-10-28 05:11:02.814416] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
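For context on the check traced just before this second bdevperf launch: host/digest.sh@27 and @28 read the NVMe error counters back over bdevperf's RPC socket with bdev_get_iostat and assert that the transient-transport-error count is non-zero (it evaluated as (( 240 > 0 )) in this run). A minimal standalone sketch of that check, assuming the same socket path, bdev name, and workspace layout as this job:

  #!/usr/bin/env bash
  # Sketch of the get_transient_errcount step traced above (digest.sh@27/@28).
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Fetch per-bdev iostat over bdevperf's RPC socket and pull out the
  # command_transient_transport_error counter for nvme0n1.
  errcount=$("$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # The test only requires that at least one injected digest error was counted.
  (( errcount > 0 ))
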
00:35:12.478 [2024-10-28 05:11:02.851097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.478 [2024-10-28 05:11:02.897267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.434 05:11:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:13.434 05:11:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:13.434 05:11:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:13.434 05:11:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:13.773 05:11:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:13.773 05:11:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.773 05:11:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:13.773 05:11:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.773 05:11:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:13.773 05:11:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:14.030 nvme0n1 00:35:14.030 05:11:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:14.030 05:11:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.030 05:11:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.030 05:11:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.030 05:11:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:14.030 05:11:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:14.030 Running I/O for 2 seconds... 
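Condensed from the trace above (host/digest.sh@61 through @69), the randwrite leg arms the digest failures before any I/O is issued: NVMe error statistics are enabled on the bdevperf side, any stale crc32c injection is cleared, the controller is attached with the data digest flag (--ddgst), the accel error injector is set to corrupt crc32c results, and perform_tests is sent to bdevperf. A sketch of that sequence follows; rpc.py and bdevperf.py paths are taken from this workspace, the -i 256 argument is passed exactly as traced, and the socket used by rpc_cmd is not shown in this excerpt, so the default /var/tmp/spdk.sock below is an assumption.

  #!/usr/bin/env bash
  # Sketch of the randwrite-leg setup traced above; not a verbatim copy of digest.sh.
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bperf_rpc() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  # rpc_cmd in the trace targets the test's main application socket (assumed default).
  target_rpc() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

  # Keep per-bdev NVMe error counters (read back later with bdev_get_iostat);
  # the retry-count value is passed as digest.sh does.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure no earlier crc32c injection is still active.
  target_rpc accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm the injector so subsequent crc32c results are corrupted (-i 256 as traced).
  target_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  # Start the 2-second randwrite workload whose digest errors are logged below.
  "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
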
00:35:14.030 [2024-10-28 05:11:04.561024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.030 [2024-10-28 05:11:04.561349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.030 [2024-10-28 05:11:04.561428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.030 [2024-10-28 05:11:04.574802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.030 [2024-10-28 05:11:04.575083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.031 [2024-10-28 05:11:04.575113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.031 [2024-10-28 05:11:04.587363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.031 [2024-10-28 05:11:04.587610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.031 [2024-10-28 05:11:04.587662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.031 [2024-10-28 05:11:04.599840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.031 [2024-10-28 05:11:04.600154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.031 [2024-10-28 05:11:04.600199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.031 [2024-10-28 05:11:04.613213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.031 [2024-10-28 05:11:04.613527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.031 [2024-10-28 05:11:04.613579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.626335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.626630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.626669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.639517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.639813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.639843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 
m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.652562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.652805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.652842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.665147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.665445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.665500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.678068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.678377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.678406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.690979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.691293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.691347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.704045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.704304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.704333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.716693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.717023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.717081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.729903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.730131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.730160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.742574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.742871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.742901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.756151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.756461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.756490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.769684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.769984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.770013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.782974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.783204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.783233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.795448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.287 [2024-10-28 05:11:04.795764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.287 [2024-10-28 05:11:04.795794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.287 [2024-10-28 05:11:04.807866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.288 [2024-10-28 05:11:04.808178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.288 [2024-10-28 05:11:04.808206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.288 [2024-10-28 05:11:04.820431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.288 [2024-10-28 05:11:04.820768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.288 [2024-10-28 05:11:04.820799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.288 [2024-10-28 05:11:04.833872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.288 [2024-10-28 05:11:04.834173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.288 [2024-10-28 05:11:04.834202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.288 [2024-10-28 05:11:04.846979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.288 [2024-10-28 05:11:04.847274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.288 [2024-10-28 05:11:04.847303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.288 [2024-10-28 05:11:04.859845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.288 [2024-10-28 05:11:04.860158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.288 [2024-10-28 05:11:04.860186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.288 [2024-10-28 05:11:04.872715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.288 [2024-10-28 05:11:04.872952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.288 [2024-10-28 05:11:04.872980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:04.885768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:04.886075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:04.886105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:04.898939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:04.899170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:04.899199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:04.911385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:04.911693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:04.911723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:04.924262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:04.924495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:04.924542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:04.936753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:04.937054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:04.937083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:04.949689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:04.949934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:04.949977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:04.962012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:04.962298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:04.962326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:04.974454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:04.974779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:04.974841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:04.987567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:04.987837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:04.987871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:05.000051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:05.000355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:05.000383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:05.013071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:05.013381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:05.013409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:05.025469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:05.025718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:05.025763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:05.038339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:05.038642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:05.038671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:05.050885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:05.051181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:05.051209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:05.063643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:05.063877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:05.063920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:05.076248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:05.076555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:05.076583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:05.089733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:05.090037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:05.090101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:05.102284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:05.102567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:05.102601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:05.114978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:05.115220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:05.115249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.545 [2024-10-28 05:11:05.127391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.545 [2024-10-28 05:11:05.127617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.545 [2024-10-28 05:11:05.127654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.802 [2024-10-28 05:11:05.140074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.802 [2024-10-28 05:11:05.140304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.802 [2024-10-28 05:11:05.140332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.802 [2024-10-28 05:11:05.152544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.152814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.152843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.165102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.165332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.165360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.177101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.177344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.177387] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.189356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.189609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.189644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.201789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.202104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.202133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.214336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.214662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.214706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.226740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.227080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.227108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.239298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.239524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.239570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.252327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.252644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.252702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.264892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.265202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.265231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.277898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.278206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.278261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.290559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.290883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.290912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.303407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.303715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.303744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.316166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.316392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.316421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.328341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.328609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.328645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.342031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.342362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.342422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.354825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.355113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.355142] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.367138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.367381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.367409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.379561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.379827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.379856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:14.803 [2024-10-28 05:11:05.391950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:14.803 [2024-10-28 05:11:05.392247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.803 [2024-10-28 05:11:05.392276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.061 [2024-10-28 05:11:05.405456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.061 [2024-10-28 05:11:05.405721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.061 [2024-10-28 05:11:05.405750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.061 [2024-10-28 05:11:05.417930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.061 [2024-10-28 05:11:05.418231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.061 [2024-10-28 05:11:05.418292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.061 [2024-10-28 05:11:05.430912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.061 [2024-10-28 05:11:05.431220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.061 [2024-10-28 05:11:05.431295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.061 [2024-10-28 05:11:05.443400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.061 [2024-10-28 05:11:05.443630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.061 [2024-10-28 
05:11:05.443667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.061 [2024-10-28 05:11:05.456343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.061 [2024-10-28 05:11:05.456658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.061 [2024-10-28 05:11:05.456687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.061 [2024-10-28 05:11:05.468757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.061 [2024-10-28 05:11:05.469001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.061 [2024-10-28 05:11:05.469029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.061 [2024-10-28 05:11:05.481692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.061 [2024-10-28 05:11:05.481995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.061 [2024-10-28 05:11:05.482023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.061 [2024-10-28 05:11:05.494331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 [2024-10-28 05:11:05.494622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 [2024-10-28 05:11:05.494660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.062 [2024-10-28 05:11:05.507204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 [2024-10-28 05:11:05.507504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 [2024-10-28 05:11:05.507532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.062 [2024-10-28 05:11:05.519795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 [2024-10-28 05:11:05.520031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 [2024-10-28 05:11:05.520060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.062 [2024-10-28 05:11:05.532301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 [2024-10-28 05:11:05.532559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 
[2024-10-28 05:11:05.532605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.062 [2024-10-28 05:11:05.544991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 19840.00 IOPS, 77.50 MiB/s [2024-10-28T04:11:05.658Z] [2024-10-28 05:11:05.545541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 [2024-10-28 05:11:05.545582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.062 [2024-10-28 05:11:05.557583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 [2024-10-28 05:11:05.557892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 [2024-10-28 05:11:05.557921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.062 [2024-10-28 05:11:05.569957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 [2024-10-28 05:11:05.570187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 [2024-10-28 05:11:05.570214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.062 [2024-10-28 05:11:05.582932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 [2024-10-28 05:11:05.583240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 [2024-10-28 05:11:05.583285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.062 [2024-10-28 05:11:05.595835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 [2024-10-28 05:11:05.596120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 [2024-10-28 05:11:05.596149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.062 [2024-10-28 05:11:05.608205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 [2024-10-28 05:11:05.608544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 [2024-10-28 05:11:05.608572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.062 [2024-10-28 05:11:05.620549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 [2024-10-28 05:11:05.620790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:115 nsid:1 lba:10319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 [2024-10-28 05:11:05.620819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.062 [2024-10-28 05:11:05.633089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 [2024-10-28 05:11:05.633406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 [2024-10-28 05:11:05.633435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.062 [2024-10-28 05:11:05.645788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.062 [2024-10-28 05:11:05.646021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.062 [2024-10-28 05:11:05.646049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.320 [2024-10-28 05:11:05.658629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.320 [2024-10-28 05:11:05.658927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.320 [2024-10-28 05:11:05.658957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.320 [2024-10-28 05:11:05.671568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.320 [2024-10-28 05:11:05.671851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.320 [2024-10-28 05:11:05.671880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.320 [2024-10-28 05:11:05.684082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.320 [2024-10-28 05:11:05.684374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.320 [2024-10-28 05:11:05.684402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.320 [2024-10-28 05:11:05.696615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.320 [2024-10-28 05:11:05.696855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.320 [2024-10-28 05:11:05.696884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.320 [2024-10-28 05:11:05.709403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.320 [2024-10-28 05:11:05.709707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.320 [2024-10-28 05:11:05.709736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.320 [2024-10-28 05:11:05.722199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.320 [2024-10-28 05:11:05.722430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.320 [2024-10-28 05:11:05.722458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.320 [2024-10-28 05:11:05.734952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.320 [2024-10-28 05:11:05.735256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.320 [2024-10-28 05:11:05.735284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.320 [2024-10-28 05:11:05.747716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.320 [2024-10-28 05:11:05.747973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.748003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.760332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.321 [2024-10-28 05:11:05.760640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.760705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.773463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.321 [2024-10-28 05:11:05.773704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.773733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.786014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.321 [2024-10-28 05:11:05.786338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.786397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.798945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.321 
[2024-10-28 05:11:05.799175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.799204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.811493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.321 [2024-10-28 05:11:05.811790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.811819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.824491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.321 [2024-10-28 05:11:05.824783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.824812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.837183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.321 [2024-10-28 05:11:05.837510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.837557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.850246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.321 [2024-10-28 05:11:05.850493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.850520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.862428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.321 [2024-10-28 05:11:05.862758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.862787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.874768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.321 [2024-10-28 05:11:05.875109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.875166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.887161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with 
pdu=0x2000166fda78 00:35:15.321 [2024-10-28 05:11:05.887484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.887514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.900160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.321 [2024-10-28 05:11:05.900389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.900417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.321 [2024-10-28 05:11:05.912665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.321 [2024-10-28 05:11:05.912906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.321 [2024-10-28 05:11:05.912936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.579 [2024-10-28 05:11:05.925839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.579 [2024-10-28 05:11:05.926153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.579 [2024-10-28 05:11:05.926182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.579 [2024-10-28 05:11:05.938790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.579 [2024-10-28 05:11:05.939128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.579 [2024-10-28 05:11:05.939156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.579 [2024-10-28 05:11:05.951453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.579 [2024-10-28 05:11:05.951712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.579 [2024-10-28 05:11:05.951742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.579 [2024-10-28 05:11:05.964308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.579 [2024-10-28 05:11:05.964615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.579 [2024-10-28 05:11:05.964669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:05.976857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:05.977178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:05.977207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:05.989772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:05.990133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:05.990162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.002198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.002498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.002527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.015159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.015391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.015420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.027518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.027838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.027866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.040415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.040715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.040744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.052605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.052842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.052870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.065659] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.065920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.065949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.078041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.078272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.078301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.090482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.090725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.090760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.103867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.104165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.104193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.116377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.116607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.116642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.128493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.128743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.128772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.141360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.141669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.141698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:35:15.580 [2024-10-28 05:11:06.154229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.154570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.154656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.580 [2024-10-28 05:11:06.166839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.580 [2024-10-28 05:11:06.167143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.580 [2024-10-28 05:11:06.167171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.180328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.180623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.180682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.193556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.193803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.193833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.207206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.207513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.207542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.220133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.220415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.220445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.233783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.234064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.234093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.246406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.246653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.246683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.259481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.259782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.259811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.272201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.272445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.272475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.284759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.284997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.285026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.297344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.297583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.297613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.310059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.310400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.310454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.322748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.323104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.323161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.335622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.335867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.335897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.348660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.348900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.348929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.361498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.361744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.361773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.374251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.374566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.374595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.386797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.387149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.387210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.399532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.839 [2024-10-28 05:11:06.399808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.839 [2024-10-28 05:11:06.399838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.839 [2024-10-28 05:11:06.412365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.840 [2024-10-28 05:11:06.412691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.840 [2024-10-28 05:11:06.412753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:15.840 [2024-10-28 05:11:06.425435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:15.840 [2024-10-28 05:11:06.425733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.840 [2024-10-28 05:11:06.425771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.098 [2024-10-28 05:11:06.438413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:16.098 [2024-10-28 05:11:06.438715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.098 [2024-10-28 05:11:06.438745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.098 [2024-10-28 05:11:06.451519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:16.098 [2024-10-28 05:11:06.451768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.098 [2024-10-28 05:11:06.451797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.098 [2024-10-28 05:11:06.464253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:16.098 [2024-10-28 05:11:06.464578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.098 [2024-10-28 05:11:06.464607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.098 [2024-10-28 05:11:06.477318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:16.098 [2024-10-28 05:11:06.477621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.098 [2024-10-28 05:11:06.477689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.098 [2024-10-28 05:11:06.490037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:16.098 [2024-10-28 05:11:06.490303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.098 [2024-10-28 05:11:06.490332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.098 [2024-10-28 05:11:06.503045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78 00:35:16.098 [2024-10-28 05:11:06.503353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.098 
[2024-10-28 05:11:06.503427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:35:16.098 [2024-10-28 05:11:06.515928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78
00:35:16.098 [2024-10-28 05:11:06.516167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:16.098 [2024-10-28 05:11:06.516197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:35:16.098 [2024-10-28 05:11:06.528605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78
00:35:16.098 [2024-10-28 05:11:06.528852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:16.098 [2024-10-28 05:11:06.528882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:35:16.098 [2024-10-28 05:11:06.541354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196b850) with pdu=0x2000166fda78
00:35:16.098 [2024-10-28 05:11:06.541598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:16.098 [2024-10-28 05:11:06.541627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:35:16.098 19896.50 IOPS, 77.72 MiB/s
00:35:16.098 Latency(us)
00:35:16.098 [2024-10-28T04:11:06.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:16.099 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:16.099 nvme0n1 : 2.01 19898.20 77.73 0.00 0.00 6419.31 4987.89 14112.08
00:35:16.099 [2024-10-28T04:11:06.695Z] ===================================================================================================================
00:35:16.099 [2024-10-28T04:11:06.695Z] Total : 19898.20 77.73 0.00 0.00 6419.31 4987.89 14112.08
00:35:16.099 {
00:35:16.099 "results": [
00:35:16.099 {
00:35:16.099 "job": "nvme0n1",
00:35:16.099 "core_mask": "0x2",
00:35:16.099 "workload": "randwrite",
00:35:16.099 "status": "finished",
00:35:16.099 "queue_depth": 128,
00:35:16.099 "io_size": 4096,
00:35:16.099 "runtime": 2.006262,
00:35:16.099 "iops": 19898.198739745854,
00:35:16.099 "mibps": 77.72733882713224,
00:35:16.099 "io_failed": 0,
00:35:16.099 "io_timeout": 0,
00:35:16.099 "avg_latency_us": 6419.309977115699,
00:35:16.099 "min_latency_us": 4987.889363281975,
00:35:16.099 "max_latency_us": 14112.077222944125
00:35:16.099 }
00:35:16.099 ],
00:35:16.099 "core_count": 1
00:35:16.099 }
00:35:16.099 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:16.099 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:16.099 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:16.099 | .driver_specific
00:35:16.099 | .nvme_error
00:35:16.099 | .status_code
00:35:16.099 | .command_transient_transport_error'
00:35:16.099 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:16.357 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 )) 00:35:16.357 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2473782 00:35:16.357 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2473782 ']' 00:35:16.357 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2473782 00:35:16.357 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:16.357 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:16.357 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2473782 00:35:16.357 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:16.357 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:16.357 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2473782' 00:35:16.357 killing process with pid 2473782 00:35:16.357 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2473782 00:35:16.357 Received shutdown signal, test time was about 2.000000 seconds 00:35:16.357 00:35:16.357 Latency(us) 00:35:16.357 [2024-10-28T04:11:06.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.357 [2024-10-28T04:11:06.953Z] =================================================================================================================== 00:35:16.357 [2024-10-28T04:11:06.953Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:16.357 05:11:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2473782 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2474246 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2474246 /var/tmp/bperf.sock 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2474246 ']' 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:16.615 05:11:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:16.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:16.615 05:11:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:16.615 [2024-10-28 05:11:07.133877] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:35:16.615 [2024-10-28 05:11:07.133982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474246 ] 00:35:16.615 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:16.615 Zero copy mechanism will not be used. 00:35:16.874 [2024-10-28 05:11:07.271800] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:16.874 [2024-10-28 05:11:07.308852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.874 [2024-10-28 05:11:07.358808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.809 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:17.809 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:17.809 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:17.809 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:18.067 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:18.067 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.067 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.067 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.067 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:18.067 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:18.325 nvme0n1 00:35:18.587 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:18.587 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.587 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.587 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.587 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:18.587 05:11:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:18.587 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:18.587 Zero copy mechanism will not be used. 00:35:18.587 Running I/O for 2 seconds... 00:35:18.587 [2024-10-28 05:11:09.073420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.587 [2024-10-28 05:11:09.073777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.073818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.587 [2024-10-28 05:11:09.082648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.587 [2024-10-28 05:11:09.082963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.082995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.587 [2024-10-28 05:11:09.091834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.587 [2024-10-28 05:11:09.092165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.092196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.587 [2024-10-28 05:11:09.100792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.587 [2024-10-28 05:11:09.101108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.101138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.587 [2024-10-28 05:11:09.109942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.587 [2024-10-28 05:11:09.110270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.110301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.587 [2024-10-28 05:11:09.118911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.587 [2024-10-28 
05:11:09.119237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.119281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.587 [2024-10-28 05:11:09.127993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.587 [2024-10-28 05:11:09.128360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.128405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.587 [2024-10-28 05:11:09.137389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.587 [2024-10-28 05:11:09.137532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.137566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.587 [2024-10-28 05:11:09.146137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.587 [2024-10-28 05:11:09.146546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.146576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.587 [2024-10-28 05:11:09.154851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.587 [2024-10-28 05:11:09.155253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.155283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.587 [2024-10-28 05:11:09.163387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.587 [2024-10-28 05:11:09.163804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.163835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.587 [2024-10-28 05:11:09.171302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.587 [2024-10-28 05:11:09.171646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.171677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.587 [2024-10-28 05:11:09.179887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with 
pdu=0x2000166fef90 00:35:18.587 [2024-10-28 05:11:09.180333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.587 [2024-10-28 05:11:09.180364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.188671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.189019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.189050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.197058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.197421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.197451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.204114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.204411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.204442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.210932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.211299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.211329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.218300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.218643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.218674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.225466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.225873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.225904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.232912] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.233210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.233242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.240037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.240333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.240362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.247268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.247575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.247606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.255070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.255422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.255452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.261821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.262146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.262184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.268714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.269038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.269069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.275411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.275715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.275745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.282456] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.282797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.847 [2024-10-28 05:11:09.282828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.847 [2024-10-28 05:11:09.289673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.847 [2024-10-28 05:11:09.290012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.290042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.296950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.297246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.297276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.303618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.303923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.303953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.310662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.310957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.310987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.317861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.318156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.318186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.325313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.325669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.325699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:35:18.848 [2024-10-28 05:11:09.331984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.332287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.332324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.339065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.339390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.339419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.345892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.346186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.346216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.352484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.352791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.352821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.359469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.359736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.359767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.366367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.366685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.366716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.373070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.373372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.373403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.379538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.379799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.379835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.386324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.386576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.386607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.392866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.393172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.393201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.399866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.400125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.400155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.406962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.407236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.407267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.413844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.414108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.414153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.420912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.421182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.421212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.427666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.427949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.427979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.848 [2024-10-28 05:11:09.434868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:18.848 [2024-10-28 05:11:09.435152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.848 [2024-10-28 05:11:09.435183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.442197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.442517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.442547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.449098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.449394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.449424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.455873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.456208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.456238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.463212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.463492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.463521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.470290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.470582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.470613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.476953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.477205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.477250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.483768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.484028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.484058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.490870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.491117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.491147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.497770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.498039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.498068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.504411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.504671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.504701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.511299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.511572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.511602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.517913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.518165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 
[2024-10-28 05:11:09.518195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.524605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.524904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.524934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.531289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.531569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.108 [2024-10-28 05:11:09.531599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.108 [2024-10-28 05:11:09.538059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.108 [2024-10-28 05:11:09.538352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.538382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.545208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.545461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.545491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.552202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.552483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.552514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.558871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.559149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.559186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.565525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.565803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.565833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.572363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.572613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.572650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.579432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.579711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.579741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.586136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.586389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.586419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.592460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.592720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.592750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.598646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.598899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.598929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.604814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.605081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.605110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.611569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.611839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.611869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.618331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.618620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.618661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.625079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.625347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.625377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.631833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.632085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.632115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.638484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.638742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.638771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.645484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.645742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.645772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.652490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.652777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.652807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.658982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.659266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.659295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.665945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.666199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.666230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.673329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.673580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.673610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.680060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.680327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.680371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.686577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.686868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.686899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.693313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.693564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.693593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.109 [2024-10-28 05:11:09.700038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.109 [2024-10-28 05:11:09.700293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.109 [2024-10-28 05:11:09.700323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.707229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 
[2024-10-28 05:11:09.707569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.707599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.714011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.714263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.714293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.720572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.720873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.720903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.727569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.727939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.727970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.734414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.734675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.734712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.741172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.741421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.741451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.748320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.748592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.748621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.755232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.755479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.755509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.762114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.762388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.762418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.769331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.769608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.769646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.776046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.776307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.776337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.782752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.783004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.783034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.789491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.789766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.789805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.796311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.368 [2024-10-28 05:11:09.796578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.368 [2024-10-28 05:11:09.796607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.368 [2024-10-28 05:11:09.803367] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.803610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.803647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.809821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.810065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.810094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.816699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.816966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.816995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.823285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.823575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.823605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.829622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.829903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.829933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.836825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.837107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.837138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.844004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.844247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.844276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:35:19.369 [2024-10-28 05:11:09.851000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.851271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.851306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.857912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.858178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.858206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.865031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.865273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.865302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.871890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.872155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.872185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.878570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.878875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.878904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.885120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.885367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.885396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.892059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.892320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.892350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.898624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.898930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.898975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.905597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.905874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.905903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.912762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.913051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.913080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.919285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.919582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.919611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.926539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.926809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.926838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.932432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.932763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.932791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.938979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.939246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.939289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.945429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.945788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.945817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.953143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.953412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.953440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.369 [2024-10-28 05:11:09.960283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.369 [2024-10-28 05:11:09.960614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.369 [2024-10-28 05:11:09.960651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.628 [2024-10-28 05:11:09.967373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.628 [2024-10-28 05:11:09.967577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.628 [2024-10-28 05:11:09.967605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.628 [2024-10-28 05:11:09.974039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.628 [2024-10-28 05:11:09.974295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.628 [2024-10-28 05:11:09.974323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.628 [2024-10-28 05:11:09.980942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.628 [2024-10-28 05:11:09.981210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.628 [2024-10-28 05:11:09.981238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.628 [2024-10-28 05:11:09.987520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.628 [2024-10-28 05:11:09.987817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.628 [2024-10-28 05:11:09.987847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.628 [2024-10-28 05:11:09.994481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.628 [2024-10-28 05:11:09.994753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.628 [2024-10-28 05:11:09.994783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.628 [2024-10-28 05:11:10.001252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.628 [2024-10-28 05:11:10.001540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.628 [2024-10-28 05:11:10.001570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.628 [2024-10-28 05:11:10.008776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.628 [2024-10-28 05:11:10.009074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.009121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.015343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.015595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.015625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.022289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.022590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.022620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.029729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.030015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.030055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.036841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.037145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 
[2024-10-28 05:11:10.037175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.044147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.044471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.044502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.051148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.051393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.051423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.058388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.058642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.058671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.629 4355.00 IOPS, 544.38 MiB/s [2024-10-28T04:11:10.225Z] [2024-10-28 05:11:10.066507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.066647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.066677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.074136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.074288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.074330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.082056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.082240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.082269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.090419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.090711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.090742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.099580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.099810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.099840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.109235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.109473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.109502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.118338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.118545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.118573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.127824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.128048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.128075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.136991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.137235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.137264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.145138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.145342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.145369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.153966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.154103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.154135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.162003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.162223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.162251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.171051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.171213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.171240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.179955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.180069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.180097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.188267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.188465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.188493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.629 [2024-10-28 05:11:10.196484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.629 [2024-10-28 05:11:10.196617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.629 [2024-10-28 05:11:10.196654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.630 [2024-10-28 05:11:10.205162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.630 [2024-10-28 05:11:10.205351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.630 [2024-10-28 05:11:10.205379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.630 [2024-10-28 05:11:10.214356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.630 
[2024-10-28 05:11:10.214558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.630 [2024-10-28 05:11:10.214601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.222750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.222937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.222970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.231805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.231995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.232022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.240838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.240990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.241022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.248707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.248865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.248898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.256792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.256991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.257020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.264613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.264777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.264806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.271999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.272128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.272159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.278953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.279074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.279102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.285566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.285714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.285742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.292887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.293005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.293034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.300233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.300395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.300423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.307317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.307467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.307495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.314546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.314665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.314693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.321968] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.322089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.322117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.328742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.328946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.328975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.335793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.889 [2024-10-28 05:11:10.335919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.889 [2024-10-28 05:11:10.335947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.889 [2024-10-28 05:11:10.343589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.343769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.343799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.350832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.350980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.351008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.357939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.358095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.358124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.365181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.365303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.365331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:35:19.890 [2024-10-28 05:11:10.372664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.372841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.372881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.380074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.380196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.380224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.387592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.387703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.387734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.395178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.395334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.395361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.402536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.402756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.402789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.410074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.410235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.410263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.418265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.418528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.418556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.426323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.426592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.426620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.435309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.435520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.435549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.443696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.443897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.443926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.451091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.451233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.451261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.458405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.458558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.458586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.465689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.465882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.465910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.473579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.473727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.473757] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.890 [2024-10-28 05:11:10.481859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:19.890 [2024-10-28 05:11:10.481957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.890 [2024-10-28 05:11:10.481987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.150 [2024-10-28 05:11:10.489879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.150 [2024-10-28 05:11:10.490088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.150 [2024-10-28 05:11:10.490117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.150 [2024-10-28 05:11:10.496922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.150 [2024-10-28 05:11:10.497030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.150 [2024-10-28 05:11:10.497058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.150 [2024-10-28 05:11:10.503600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.150 [2024-10-28 05:11:10.503704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.150 [2024-10-28 05:11:10.503733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.150 [2024-10-28 05:11:10.510805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.150 [2024-10-28 05:11:10.510955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.150 [2024-10-28 05:11:10.510984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.150 [2024-10-28 05:11:10.518237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.150 [2024-10-28 05:11:10.518371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.150 [2024-10-28 05:11:10.518400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.150 [2024-10-28 05:11:10.525882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.150 [2024-10-28 05:11:10.526033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.150 [2024-10-28 05:11:10.526061] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.150 [2024-10-28 05:11:10.533183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.150 [2024-10-28 05:11:10.533335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.150 [2024-10-28 05:11:10.533364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.150 [2024-10-28 05:11:10.540713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.540863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.540895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.547594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.547816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.547845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.555659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.555799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.555828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.563477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.563642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.563671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.571013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.571205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.571238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.578460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.578629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:20.151 [2024-10-28 05:11:10.578669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.586738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.586946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.586977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.593976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.594070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.594098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.601600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.601739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.601772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.609460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.609643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.609673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.617225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.617366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.617395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.624797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.624913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.624962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.632675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.632892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.632937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.640334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.640464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.640492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.647530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.647647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.647675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.654551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.654722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.654751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.662115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.662282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.662313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.670304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.670435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.670463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.677811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.677954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.677984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.684941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.685117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.685160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.692516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.692715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.692744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.700448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.700607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.700658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.708894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.709095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.151 [2024-10-28 05:11:10.709139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.151 [2024-10-28 05:11:10.717481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.151 [2024-10-28 05:11:10.717674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.152 [2024-10-28 05:11:10.717704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.152 [2024-10-28 05:11:10.726887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.152 [2024-10-28 05:11:10.727068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.152 [2024-10-28 05:11:10.727096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.152 [2024-10-28 05:11:10.736486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.152 [2024-10-28 05:11:10.736671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.152 [2024-10-28 05:11:10.736701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.424 [2024-10-28 05:11:10.745587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.424 [2024-10-28 05:11:10.745876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.424 [2024-10-28 05:11:10.745906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.424 [2024-10-28 05:11:10.752893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.424 [2024-10-28 05:11:10.752994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.424 [2024-10-28 05:11:10.753024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.424 [2024-10-28 05:11:10.760139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.424 [2024-10-28 05:11:10.760236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.760264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.767059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.767157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.767191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.774893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.775055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.775092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.783037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.783220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.783248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.791796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.791956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.791985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.800877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 
[2024-10-28 05:11:10.801048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.801080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.809471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.809739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.809769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.819323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.819579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.819608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.828383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.828583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.828611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.837321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.837534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.837564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.846157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.846383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.846411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.855496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.855793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.855823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.864540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.864798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.864828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.874492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.874662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.874696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.883305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.883483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.883511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.892126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.892332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.892361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.900263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.900447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.900475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.908293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.908484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.908516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.916330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.916527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.916558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.923935] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.924097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.924128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.931294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.931435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.931463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.938591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.938770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.938804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.945949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.425 [2024-10-28 05:11:10.946107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.425 [2024-10-28 05:11:10.946135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.425 [2024-10-28 05:11:10.953625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.426 [2024-10-28 05:11:10.953820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.426 [2024-10-28 05:11:10.953849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.426 [2024-10-28 05:11:10.961565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.426 [2024-10-28 05:11:10.961734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.426 [2024-10-28 05:11:10.961763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.426 [2024-10-28 05:11:10.969123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.426 [2024-10-28 05:11:10.969302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.426 [2024-10-28 05:11:10.969330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:35:20.426 [2024-10-28 05:11:10.976829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.426 [2024-10-28 05:11:10.977064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.426 [2024-10-28 05:11:10.977092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.426 [2024-10-28 05:11:10.984230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.426 [2024-10-28 05:11:10.984327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.426 [2024-10-28 05:11:10.984355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.426 [2024-10-28 05:11:10.991473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.426 [2024-10-28 05:11:10.991675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.426 [2024-10-28 05:11:10.991708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.426 [2024-10-28 05:11:10.999010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.426 [2024-10-28 05:11:10.999173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.426 [2024-10-28 05:11:10.999201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.426 [2024-10-28 05:11:11.006431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.426 [2024-10-28 05:11:11.006555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.426 [2024-10-28 05:11:11.006583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.426 [2024-10-28 05:11:11.014121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.426 [2024-10-28 05:11:11.014267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.426 [2024-10-28 05:11:11.014297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.685 [2024-10-28 05:11:11.021707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.685 [2024-10-28 05:11:11.021905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.685 [2024-10-28 05:11:11.021934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.685 [2024-10-28 05:11:11.029173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.685 [2024-10-28 05:11:11.029322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.685 [2024-10-28 05:11:11.029354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.685 [2024-10-28 05:11:11.036226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.685 [2024-10-28 05:11:11.036354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.685 [2024-10-28 05:11:11.036384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.685 [2024-10-28 05:11:11.043654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.685 [2024-10-28 05:11:11.043863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.685 [2024-10-28 05:11:11.043891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.685 [2024-10-28 05:11:11.051286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.685 [2024-10-28 05:11:11.051483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.685 [2024-10-28 05:11:11.051511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.685 [2024-10-28 05:11:11.059264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x196bb90) with pdu=0x2000166fef90 00:35:20.685 [2024-10-28 05:11:11.059362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.685 [2024-10-28 05:11:11.059390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.685 4127.50 IOPS, 515.94 MiB/s 00:35:20.685 Latency(us) 00:35:20.685 [2024-10-28T04:11:11.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.685 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:20.685 nvme0n1 : 2.00 4126.78 515.85 0.00 0.00 3868.60 2700.76 9683.80 00:35:20.685 [2024-10-28T04:11:11.281Z] =================================================================================================================== 00:35:20.685 [2024-10-28T04:11:11.281Z] Total : 4126.78 515.85 0.00 0.00 3868.60 2700.76 9683.80 00:35:20.685 { 00:35:20.685 "results": [ 00:35:20.685 { 00:35:20.685 "job": "nvme0n1", 00:35:20.685 "core_mask": "0x2", 00:35:20.685 "workload": "randwrite", 00:35:20.685 "status": "finished", 00:35:20.685 "queue_depth": 16, 00:35:20.685 "io_size": 131072, 00:35:20.685 "runtime": 2.004955, 00:35:20.685 "iops": 4126.775912676344, 00:35:20.685 
"mibps": 515.846989084543, 00:35:20.685 "io_failed": 0, 00:35:20.685 "io_timeout": 0, 00:35:20.685 "avg_latency_us": 3868.5975741113743, 00:35:20.685 "min_latency_us": 2700.759606459996, 00:35:20.685 "max_latency_us": 9683.80471505476 00:35:20.685 } 00:35:20.685 ], 00:35:20.685 "core_count": 1 00:35:20.685 } 00:35:20.685 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:20.685 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:20.685 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:20.685 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:20.685 | .driver_specific 00:35:20.685 | .nvme_error 00:35:20.685 | .status_code 00:35:20.685 | .command_transient_transport_error' 00:35:20.943 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 266 > 0 )) 00:35:20.943 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2474246 00:35:20.943 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2474246 ']' 00:35:20.943 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2474246 00:35:20.943 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:20.943 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:20.944 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2474246 00:35:20.944 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:20.944 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:20.944 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2474246' 00:35:20.944 killing process with pid 2474246 00:35:20.944 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2474246 00:35:20.944 Received shutdown signal, test time was about 2.000000 seconds 00:35:20.944 00:35:20.944 Latency(us) 00:35:20.944 [2024-10-28T04:11:11.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.944 [2024-10-28T04:11:11.540Z] =================================================================================================================== 00:35:20.944 [2024-10-28T04:11:11.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:20.944 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2474246 00:35:21.202 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2472600 00:35:21.202 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2472600 ']' 00:35:21.202 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2472600 00:35:21.202 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # 
uname 00:35:21.202 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:21.202 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2472600 00:35:21.202 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:21.202 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:21.202 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2472600' 00:35:21.202 killing process with pid 2472600 00:35:21.202 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2472600 00:35:21.202 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2472600 00:35:21.461 00:35:21.461 real 0m18.666s 00:35:21.461 user 0m37.372s 00:35:21.461 sys 0m4.338s 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.461 ************************************ 00:35:21.461 END TEST nvmf_digest_error 00:35:21.461 ************************************ 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:21.461 rmmod nvme_tcp 00:35:21.461 rmmod nvme_fabrics 00:35:21.461 rmmod nvme_keyring 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 2472600 ']' 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 2472600 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2472600 ']' 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2472600 00:35:21.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2472600) - No such process 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2472600 is not found' 00:35:21.461 Process with pid 2472600 is not found 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:21.461 
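The pass/fail decision for this digest-error run comes from the (( 266 > 0 )) check above: host/digest.sh reads the per-bdev NVMe error counters over the bperf RPC socket and only requires that at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded. A minimal sketch of that query, reusing the rpc.py invocation and jq filter shown in the trace; the console-log cross-check at the end uses a hypothetical file name:

    # Query the transient transport error counter for nvme0n1 over the bdevperf RPC socket,
    # mirroring host/digest.sh's get_transient_errcount (paths taken from the trace above).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The suite only asserts the counter is non-zero (it was 266 in this run).
    (( errcount > 0 )) && echo "transient transport errors counted: $errcount"

    # Optional cross-check against a saved console log (file name is an assumption):
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf_console.log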
05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.461 05:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.366 05:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:23.366 00:35:23.366 real 0m41.861s 00:35:23.366 user 1m16.005s 00:35:23.366 sys 0m10.035s 00:35:23.366 05:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:23.366 05:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:23.366 ************************************ 00:35:23.366 END TEST nvmf_digest 00:35:23.366 ************************************ 00:35:23.624 05:11:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:23.624 05:11:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:23.624 05:11:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:23.624 05:11:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:23.624 05:11:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:23.624 05:11:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:23.624 05:11:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.624 ************************************ 00:35:23.624 START TEST nvmf_bdevperf 00:35:23.624 ************************************ 00:35:23.624 05:11:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:23.624 * Looking for test storage... 
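With the digest suite finished (0m41.861s wall clock), the harness moves straight on to the next host test via run_test nvmf_bdevperf .../test/nvmf/host/bdevperf.sh --transport=tcp. A hedged sketch of re-running that same test outside Jenkins, assuming a checked-out SPDK workspace and a TCP-capable test NIC configured as on this node (root is typically needed for hugepage and NIC setup):

    # Standalone re-run of the test started above; the workspace path mirrors this job,
    # adjust it to your own checkout.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    cd "$SPDK_DIR"
    sudo ./test/nvmf/host/bdevperf.sh --transport=tcp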
00:35:23.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1689 -- # lcov --version 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:23.624 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:35:23.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.625 --rc genhtml_branch_coverage=1 00:35:23.625 --rc genhtml_function_coverage=1 00:35:23.625 --rc genhtml_legend=1 00:35:23.625 --rc geninfo_all_blocks=1 00:35:23.625 --rc geninfo_unexecuted_blocks=1 00:35:23.625 00:35:23.625 ' 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:35:23.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.625 --rc genhtml_branch_coverage=1 00:35:23.625 --rc genhtml_function_coverage=1 00:35:23.625 --rc genhtml_legend=1 00:35:23.625 --rc geninfo_all_blocks=1 00:35:23.625 --rc geninfo_unexecuted_blocks=1 00:35:23.625 00:35:23.625 ' 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:35:23.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.625 --rc genhtml_branch_coverage=1 00:35:23.625 --rc genhtml_function_coverage=1 00:35:23.625 --rc genhtml_legend=1 00:35:23.625 --rc geninfo_all_blocks=1 00:35:23.625 --rc geninfo_unexecuted_blocks=1 00:35:23.625 00:35:23.625 ' 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:35:23.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.625 --rc genhtml_branch_coverage=1 00:35:23.625 --rc genhtml_function_coverage=1 00:35:23.625 --rc genhtml_legend=1 00:35:23.625 --rc geninfo_all_blocks=1 00:35:23.625 --rc geninfo_unexecuted_blocks=1 00:35:23.625 00:35:23.625 ' 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:23.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:23.625 05:11:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:25.526 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:25.526 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
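The gather_supported_nvmf_pci_devs pass above matches the two Intel E810 functions (0x8086 - 0x159b at 0000:0a:00.0 and 0000:0a:00.1, bound to the ice driver) and then resolves each PCI address to its kernel net device by globbing /sys/bus/pci/devices/<bdf>/net/*, which is where the cvl_* names reported just below come from. A minimal standalone sketch of that sysfs lookup (the pci_to_netdev helper name is invented for illustration; the BDFs and cvl_* names are the ones from this run):

    #!/usr/bin/env bash
    # Sketch only: resolve the net device name(s) behind a PCI function,
    # the same /sys/bus/pci/devices/<bdf>/net/* glob the common.sh loop uses.
    pci_to_netdev() {
        local bdf=$1 dev
        for dev in "/sys/bus/pci/devices/$bdf/net/"*; do
            [[ -e $dev ]] || continue    # no bound netdev (e.g. driver not loaded)
            echo "${dev##*/}"            # basename of the sysfs entry
        done
    }
    pci_to_netdev 0000:0a:00.0           # -> cvl_0_0 in this run (see "Found net devices" below)
    pci_to_netdev 0000:0a:00.1           # -> cvl_0_1 in this run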
00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:25.526 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:25.526 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:25.527 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:25.527 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:25.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:25.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:35:25.787 00:35:25.787 --- 10.0.0.2 ping statistics --- 00:35:25.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.787 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:25.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:25.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:35:25.787 00:35:25.787 --- 10.0.0.1 ping statistics --- 00:35:25.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.787 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2476676 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2476676 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2476676 ']' 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:25.787 05:11:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.787 [2024-10-28 05:11:16.284125] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:35:25.787 [2024-10-28 05:11:16.284214] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:26.045 [2024-10-28 05:11:16.423690] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:35:26.045 [2024-10-28 05:11:16.464806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:26.045 [2024-10-28 05:11:16.514157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:26.045 [2024-10-28 05:11:16.514233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:26.045 [2024-10-28 05:11:16.514249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:26.046 [2024-10-28 05:11:16.514262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:26.046 [2024-10-28 05:11:16.514273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:26.046 [2024-10-28 05:11:16.515938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:26.046 [2024-10-28 05:11:16.515980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:26.046 [2024-10-28 05:11:16.515983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.979 [2024-10-28 05:11:17.355654] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.979 Malloc0 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.979 [2024-10-28 05:11:17.419334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:26.979 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:26.979 { 00:35:26.979 "params": { 00:35:26.979 "name": "Nvme$subsystem", 00:35:26.979 "trtype": "$TEST_TRANSPORT", 00:35:26.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:26.980 "adrfam": "ipv4", 00:35:26.980 "trsvcid": "$NVMF_PORT", 00:35:26.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:26.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:26.980 "hdgst": ${hdgst:-false}, 00:35:26.980 "ddgst": ${ddgst:-false} 00:35:26.980 }, 00:35:26.980 "method": "bdev_nvme_attach_controller" 00:35:26.980 } 00:35:26.980 EOF 00:35:26.980 )") 00:35:26.980 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:35:26.980 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:35:26.980 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:35:26.980 05:11:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:26.980 "params": { 00:35:26.980 "name": "Nvme1", 00:35:26.980 "trtype": "tcp", 00:35:26.980 "traddr": "10.0.0.2", 00:35:26.980 "adrfam": "ipv4", 00:35:26.980 "trsvcid": "4420", 00:35:26.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:26.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:26.980 "hdgst": false, 00:35:26.980 "ddgst": false 00:35:26.980 }, 00:35:26.980 "method": "bdev_nvme_attach_controller" 00:35:26.980 }' 00:35:26.980 [2024-10-28 05:11:17.474260] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:35:26.980 [2024-10-28 05:11:17.474344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476875 ] 00:35:27.238 [2024-10-28 05:11:17.610876] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
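Up to this point the trace has done three things that are easy to lose in the wrapped output: nvmf_tcp_init split the two ports across a network namespace (cvl_0_0 becomes the target interface at 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and TCP port 4420 is opened in iptables), nvmfappstart launched nvmf_tgt inside that namespace, and tgt_init configured it over RPC before bdevperf was pointed at the resulting subsystem. A condensed sketch of that bring-up, assuming rpc.py stands in for the rpc_cmd helper, with the long workspace paths shortened; all values are the ones from this run:

    # Namespace topology (nvmf/common.sh@265-287 above), initiator left in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Target runs inside the namespace (nvmf/common.sh@506); once its RPC socket
    # is up (waitforlisten), the rpc_cmd calls from host/bdevperf.sh@17-21 follow.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON blob printed just above is what gen_nvmf_target_json feeds bdevperf on /dev/fd/62: a single bdev_nvme_attach_controller entry naming Nvme1 at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1, so bdevperf acts as the NVMe/TCP initiator from the root namespace against the target in the namespace.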
00:35:27.238 [2024-10-28 05:11:17.649452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.238 [2024-10-28 05:11:17.699569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.496 Running I/O for 1 seconds... 00:35:28.430 8379.00 IOPS, 32.73 MiB/s 00:35:28.430 Latency(us) 00:35:28.430 [2024-10-28T04:11:19.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.430 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:28.430 Verification LBA range: start 0x0 length 0x4000 00:35:28.430 Nvme1n1 : 1.00 8466.83 33.07 0.00 0.00 15054.64 1538.95 12944.18 00:35:28.430 [2024-10-28T04:11:19.026Z] =================================================================================================================== 00:35:28.430 [2024-10-28T04:11:19.026Z] Total : 8466.83 33.07 0.00 0.00 15054.64 1538.95 12944.18 00:35:28.688 05:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2477062 00:35:28.689 05:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:28.689 05:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:28.689 05:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:28.689 05:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:35:28.689 05:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:35:28.689 05:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:28.689 05:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:28.689 { 00:35:28.689 "params": { 00:35:28.689 "name": "Nvme$subsystem", 00:35:28.689 "trtype": "$TEST_TRANSPORT", 00:35:28.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:28.689 "adrfam": "ipv4", 00:35:28.689 "trsvcid": "$NVMF_PORT", 00:35:28.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:28.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:28.689 "hdgst": ${hdgst:-false}, 00:35:28.689 "ddgst": ${ddgst:-false} 00:35:28.689 }, 00:35:28.689 "method": "bdev_nvme_attach_controller" 00:35:28.689 } 00:35:28.689 EOF 00:35:28.689 )") 00:35:28.689 05:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:35:28.689 05:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:35:28.689 05:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:35:28.689 05:11:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:28.689 "params": { 00:35:28.689 "name": "Nvme1", 00:35:28.689 "trtype": "tcp", 00:35:28.689 "traddr": "10.0.0.2", 00:35:28.689 "adrfam": "ipv4", 00:35:28.689 "trsvcid": "4420", 00:35:28.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:28.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:28.689 "hdgst": false, 00:35:28.689 "ddgst": false 00:35:28.689 }, 00:35:28.689 "method": "bdev_nvme_attach_controller" 00:35:28.689 }' 00:35:28.689 [2024-10-28 05:11:19.178106] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:35:28.689 [2024-10-28 05:11:19.178183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477062 ] 00:35:28.947 [2024-10-28 05:11:19.310487] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:28.947 [2024-10-28 05:11:19.347741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.947 [2024-10-28 05:11:19.394505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.206 Running I/O for 15 seconds... 00:35:31.073 8327.00 IOPS, 32.53 MiB/s [2024-10-28T04:11:22.237Z] 8400.00 IOPS, 32.81 MiB/s [2024-10-28T04:11:22.237Z] 05:11:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2476676 00:35:31.641 05:11:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:31.641 [2024-10-28 05:11:22.142616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.641 [2024-10-28 05:11:22.142705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.641 [2024-10-28 05:11:22.142738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.142756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.142783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.142798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.142814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.142829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.142845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.142861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.142877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.142891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.142906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.142937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.142955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.142970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.142987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:43 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37184 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.642 [2024-10-28 05:11:22.143911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 
[2024-10-28 05:11:22.143944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.143974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.143989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.144003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.144018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.144032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.144048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.642 [2024-10-28 05:11:22.144062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.642 [2024-10-28 05:11:22.144078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.643 [2024-10-28 05:11:22.144753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.144792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.144820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.144849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.144878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.144907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.144936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.144965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.144980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.144993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.145008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.145022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.145037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.145051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.145069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.145083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.145106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.145121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.145136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.145150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 
05:11:22.145165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.145178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.145194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.145207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.145223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.145236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.145252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.145266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.145281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.643 [2024-10-28 05:11:22.145295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.643 [2024-10-28 05:11:22.145310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:65 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.145979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.145993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37608 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:31.644 [2024-10-28 05:11:22.146374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.644 [2024-10-28 05:11:22.146526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.644 [2024-10-28 05:11:22.146539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-10-28 05:11:22.146553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-10-28 05:11:22.146573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-10-28 05:11:22.146589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.645 [2024-10-28 05:11:22.146602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-10-28 05:11:22.146616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc43830 is same with the state(6) to be set 00:35:31.645 [2024-10-28 05:11:22.146653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:31.645 [2024-10-28 05:11:22.146665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:31.645 [2024-10-28 05:11:22.146687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37760 len:8 PRP1 0x0 PRP2 0x0 
00:35:31.645 [2024-10-28 05:11:22.146701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:31.645 [2024-10-28 05:11:22.150404] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.645 [2024-10-28 05:11:22.150480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.645 [2024-10-28 05:11:22.151172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.645 [2024-10-28 05:11:22.151204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.645 [2024-10-28 05:11:22.151222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.645 [2024-10-28 05:11:22.151461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.645 [2024-10-28 05:11:22.151728] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.645 [2024-10-28 05:11:22.151751] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.645 [2024-10-28 05:11:22.151767] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.645 [2024-10-28 05:11:22.155363] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.645 [2024-10-28 05:11:22.164630] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.645 [2024-10-28 05:11:22.165125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.645 [2024-10-28 05:11:22.165154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.645 [2024-10-28 05:11:22.165170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.645 [2024-10-28 05:11:22.165404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.645 [2024-10-28 05:11:22.165658] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.645 [2024-10-28 05:11:22.165695] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.645 [2024-10-28 05:11:22.165709] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.645 [2024-10-28 05:11:22.169274] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.645 [2024-10-28 05:11:22.178465] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.645 [2024-10-28 05:11:22.178922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.645 [2024-10-28 05:11:22.178972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.645 [2024-10-28 05:11:22.178989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.645 [2024-10-28 05:11:22.179262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.645 [2024-10-28 05:11:22.179505] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.645 [2024-10-28 05:11:22.179529] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.645 [2024-10-28 05:11:22.179544] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.645 [2024-10-28 05:11:22.183122] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.645 [2024-10-28 05:11:22.192352] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.645 [2024-10-28 05:11:22.192767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.645 [2024-10-28 05:11:22.192810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.645 [2024-10-28 05:11:22.192827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.645 [2024-10-28 05:11:22.193068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.645 [2024-10-28 05:11:22.193325] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.645 [2024-10-28 05:11:22.193348] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.645 [2024-10-28 05:11:22.193364] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.645 [2024-10-28 05:11:22.196936] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.645 [2024-10-28 05:11:22.206168] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.645 [2024-10-28 05:11:22.206561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.645 [2024-10-28 05:11:22.206593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.645 [2024-10-28 05:11:22.206611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.645 [2024-10-28 05:11:22.206859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.645 [2024-10-28 05:11:22.207102] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.645 [2024-10-28 05:11:22.207125] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.645 [2024-10-28 05:11:22.207140] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.645 [2024-10-28 05:11:22.210708] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.645 [2024-10-28 05:11:22.220150] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.645 [2024-10-28 05:11:22.220555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.645 [2024-10-28 05:11:22.220583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.645 [2024-10-28 05:11:22.220599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.645 [2024-10-28 05:11:22.220879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.645 [2024-10-28 05:11:22.221122] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.645 [2024-10-28 05:11:22.221145] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.645 [2024-10-28 05:11:22.221160] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.645 [2024-10-28 05:11:22.224724] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.904 [2024-10-28 05:11:22.233978] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.904 [2024-10-28 05:11:22.234379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.904 [2024-10-28 05:11:22.234407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.904 [2024-10-28 05:11:22.234423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.904 [2024-10-28 05:11:22.234661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.904 [2024-10-28 05:11:22.234904] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.904 [2024-10-28 05:11:22.234927] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.904 [2024-10-28 05:11:22.234943] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.904 [2024-10-28 05:11:22.238498] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.904 [2024-10-28 05:11:22.247942] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.904 [2024-10-28 05:11:22.248353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.904 [2024-10-28 05:11:22.248385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.904 [2024-10-28 05:11:22.248403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.904 [2024-10-28 05:11:22.248651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.904 [2024-10-28 05:11:22.248894] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.904 [2024-10-28 05:11:22.248918] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.904 [2024-10-28 05:11:22.248933] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.904 [2024-10-28 05:11:22.252483] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.904 [2024-10-28 05:11:22.261884] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.904 [2024-10-28 05:11:22.262324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.904 [2024-10-28 05:11:22.262351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.904 [2024-10-28 05:11:22.262381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.904 [2024-10-28 05:11:22.262631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.904 [2024-10-28 05:11:22.262884] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.904 [2024-10-28 05:11:22.262907] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.904 [2024-10-28 05:11:22.262928] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.904 [2024-10-28 05:11:22.266481] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.904 [2024-10-28 05:11:22.275718] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.904 [2024-10-28 05:11:22.276113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.904 [2024-10-28 05:11:22.276145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.904 [2024-10-28 05:11:22.276163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.904 [2024-10-28 05:11:22.276401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.904 [2024-10-28 05:11:22.276655] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.276679] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.276694] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.280247] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.905 [2024-10-28 05:11:22.289688] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.290117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.290159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.290175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.290423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.290694] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.290736] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.290752] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.294310] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.905 [2024-10-28 05:11:22.303541] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.303971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.304003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.304020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.304258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.304501] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.304524] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.304539] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.308099] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.905 [2024-10-28 05:11:22.317343] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.317765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.317794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.317810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.318063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.318305] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.318329] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.318344] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.321909] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.905 [2024-10-28 05:11:22.331158] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.331579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.331611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.331629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.331878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.332120] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.332143] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.332159] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.335718] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.905 [2024-10-28 05:11:22.345152] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.345592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.345619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.345660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.345899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.346141] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.346164] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.346179] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.349740] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.905 [2024-10-28 05:11:22.358971] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.359337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.359369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.359392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.359630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.359884] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.359908] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.359923] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.363475] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.905 [2024-10-28 05:11:22.372915] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.373324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.373351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.373367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.373602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.373864] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.373888] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.373903] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.377452] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.905 [2024-10-28 05:11:22.386897] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.387300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.387332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.387350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.387587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.387840] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.387864] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.387880] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.391431] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.905 [2024-10-28 05:11:22.400871] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.401279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.401310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.401328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.401565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.401824] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.401849] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.401865] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.405490] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.905 [2024-10-28 05:11:22.414896] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.415308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.415340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.415359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.415597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.415850] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.415876] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.415891] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.419516] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.905 [2024-10-28 05:11:22.428856] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.429250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.429282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.429300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.429538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.429791] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.429815] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.429830] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.433381] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.905 [2024-10-28 05:11:22.442824] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.443247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.443279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.443297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.443534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.443790] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.443815] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.443836] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.447390] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.905 [2024-10-28 05:11:22.456622] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.457022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.457054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.457072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.457309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.457551] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.457574] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.457590] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.461154] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.905 [2024-10-28 05:11:22.470673] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.471089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.471120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.471138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.471375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.471618] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.471653] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.471669] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.475226] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.905 [2024-10-28 05:11:22.484668] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.905 [2024-10-28 05:11:22.485063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.905 [2024-10-28 05:11:22.485094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:31.905 [2024-10-28 05:11:22.485113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:31.905 [2024-10-28 05:11:22.485350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:31.905 [2024-10-28 05:11:22.485593] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.905 [2024-10-28 05:11:22.485616] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.905 [2024-10-28 05:11:22.485631] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.905 [2024-10-28 05:11:22.489199] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.164 [2024-10-28 05:11:22.498648] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.164 [2024-10-28 05:11:22.499054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.164 [2024-10-28 05:11:22.499086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.164 [2024-10-28 05:11:22.499103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.164 [2024-10-28 05:11:22.499341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.164 [2024-10-28 05:11:22.499583] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.164 [2024-10-28 05:11:22.499607] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.164 [2024-10-28 05:11:22.499622] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.164 [2024-10-28 05:11:22.503186] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.164 [2024-10-28 05:11:22.512625] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.164 [2024-10-28 05:11:22.513061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.164 [2024-10-28 05:11:22.513105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.164 [2024-10-28 05:11:22.513121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.164 [2024-10-28 05:11:22.513389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.164 [2024-10-28 05:11:22.513631] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.164 [2024-10-28 05:11:22.513665] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.164 [2024-10-28 05:11:22.513681] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.164 [2024-10-28 05:11:22.517232] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.164 [2024-10-28 05:11:22.526470] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.164 [2024-10-28 05:11:22.526868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.164 [2024-10-28 05:11:22.526900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.164 [2024-10-28 05:11:22.526918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.164 [2024-10-28 05:11:22.527155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.164 [2024-10-28 05:11:22.527397] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.164 [2024-10-28 05:11:22.527420] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.164 [2024-10-28 05:11:22.527435] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.164 [2024-10-28 05:11:22.530997] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.164 [2024-10-28 05:11:22.540432] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.164 [2024-10-28 05:11:22.540834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.164 [2024-10-28 05:11:22.540866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.164 [2024-10-28 05:11:22.540890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.164 [2024-10-28 05:11:22.541128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.164 [2024-10-28 05:11:22.541370] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.164 [2024-10-28 05:11:22.541393] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.164 [2024-10-28 05:11:22.541408] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.164 [2024-10-28 05:11:22.544973] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.164 [2024-10-28 05:11:22.554407] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.164 [2024-10-28 05:11:22.554804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.164 [2024-10-28 05:11:22.554836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.164 [2024-10-28 05:11:22.554853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.164 [2024-10-28 05:11:22.555091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.164 [2024-10-28 05:11:22.555333] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.164 [2024-10-28 05:11:22.555356] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.164 [2024-10-28 05:11:22.555371] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.164 [2024-10-28 05:11:22.558932] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.164 [2024-10-28 05:11:22.568366] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.164 [2024-10-28 05:11:22.568790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.164 [2024-10-28 05:11:22.568817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.164 [2024-10-28 05:11:22.568833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.164 [2024-10-28 05:11:22.569095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.164 [2024-10-28 05:11:22.569338] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.164 [2024-10-28 05:11:22.569361] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.164 [2024-10-28 05:11:22.569376] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.164 [2024-10-28 05:11:22.572938] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.164 [2024-10-28 05:11:22.582160] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.164 [2024-10-28 05:11:22.582593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.164 [2024-10-28 05:11:22.582621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.164 [2024-10-28 05:11:22.582645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.164 [2024-10-28 05:11:22.582884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.164 [2024-10-28 05:11:22.583133] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.164 [2024-10-28 05:11:22.583157] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.164 [2024-10-28 05:11:22.583172] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.164 [2024-10-28 05:11:22.586733] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.165 [2024-10-28 05:11:22.595959] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.165 [2024-10-28 05:11:22.596348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.165 [2024-10-28 05:11:22.596380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.165 [2024-10-28 05:11:22.596397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.165 [2024-10-28 05:11:22.596645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.165 [2024-10-28 05:11:22.596887] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.165 [2024-10-28 05:11:22.596911] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.165 [2024-10-28 05:11:22.596926] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.165 [2024-10-28 05:11:22.600481] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.165 [2024-10-28 05:11:22.609926] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.165 [2024-10-28 05:11:22.610381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.165 [2024-10-28 05:11:22.610422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.165 [2024-10-28 05:11:22.610439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.165 [2024-10-28 05:11:22.610705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.165 [2024-10-28 05:11:22.610948] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.165 [2024-10-28 05:11:22.610972] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.165 [2024-10-28 05:11:22.610986] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.165 [2024-10-28 05:11:22.614535] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.165 [2024-10-28 05:11:22.623775] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.165 [2024-10-28 05:11:22.624189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.165 [2024-10-28 05:11:22.624221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.165 [2024-10-28 05:11:22.624238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.165 [2024-10-28 05:11:22.624475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.165 [2024-10-28 05:11:22.624728] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.165 [2024-10-28 05:11:22.624753] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.165 [2024-10-28 05:11:22.624774] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.165 [2024-10-28 05:11:22.628336] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.165 7008.33 IOPS, 27.38 MiB/s [2024-10-28T04:11:22.761Z] [2024-10-28 05:11:22.638885] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.165 [2024-10-28 05:11:22.639311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.165 [2024-10-28 05:11:22.639355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.165 [2024-10-28 05:11:22.639371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.165 [2024-10-28 05:11:22.639649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.165 [2024-10-28 05:11:22.639892] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.165 [2024-10-28 05:11:22.639916] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.165 [2024-10-28 05:11:22.639931] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.165 [2024-10-28 05:11:22.643478] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.165 [2024-10-28 05:11:22.652704] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.165 [2024-10-28 05:11:22.653128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.165 [2024-10-28 05:11:22.653156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.165 [2024-10-28 05:11:22.653172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.165 [2024-10-28 05:11:22.653412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.165 [2024-10-28 05:11:22.653680] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.165 [2024-10-28 05:11:22.653704] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.165 [2024-10-28 05:11:22.653720] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.165 [2024-10-28 05:11:22.657340] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.165 [2024-10-28 05:11:22.666646] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.165 [2024-10-28 05:11:22.667062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.165 [2024-10-28 05:11:22.667093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.165 [2024-10-28 05:11:22.667111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.165 [2024-10-28 05:11:22.667348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.165 [2024-10-28 05:11:22.667591] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.165 [2024-10-28 05:11:22.667614] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.165 [2024-10-28 05:11:22.667629] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.165 [2024-10-28 05:11:22.671193] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.165 [2024-10-28 05:11:22.680643] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.165 [2024-10-28 05:11:22.681066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.165 [2024-10-28 05:11:22.681097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.165 [2024-10-28 05:11:22.681115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.165 [2024-10-28 05:11:22.681352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.165 [2024-10-28 05:11:22.681595] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.165 [2024-10-28 05:11:22.681618] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.165 [2024-10-28 05:11:22.681643] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.165 [2024-10-28 05:11:22.685200] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.165 [2024-10-28 05:11:22.694430] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.165 [2024-10-28 05:11:22.694868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.165 [2024-10-28 05:11:22.694900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.165 [2024-10-28 05:11:22.694918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.165 [2024-10-28 05:11:22.695154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.165 [2024-10-28 05:11:22.695397] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.165 [2024-10-28 05:11:22.695420] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.165 [2024-10-28 05:11:22.695435] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.165 [2024-10-28 05:11:22.698996] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.165 [2024-10-28 05:11:22.708221] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.165 [2024-10-28 05:11:22.708648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.165 [2024-10-28 05:11:22.708680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.166 [2024-10-28 05:11:22.708698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.166 [2024-10-28 05:11:22.708935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.166 [2024-10-28 05:11:22.709177] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.166 [2024-10-28 05:11:22.709201] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.166 [2024-10-28 05:11:22.709216] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.166 [2024-10-28 05:11:22.712784] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.166 [2024-10-28 05:11:22.722225] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.166 [2024-10-28 05:11:22.722719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.166 [2024-10-28 05:11:22.722750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.166 [2024-10-28 05:11:22.722777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.166 [2024-10-28 05:11:22.723015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.166 [2024-10-28 05:11:22.723257] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.166 [2024-10-28 05:11:22.723281] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.166 [2024-10-28 05:11:22.723297] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.166 [2024-10-28 05:11:22.726880] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.166 [2024-10-28 05:11:22.736113] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.166 [2024-10-28 05:11:22.736527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.166 [2024-10-28 05:11:22.736558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.166 [2024-10-28 05:11:22.736575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.166 [2024-10-28 05:11:22.736824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.166 [2024-10-28 05:11:22.737067] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.166 [2024-10-28 05:11:22.737091] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.166 [2024-10-28 05:11:22.737106] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.166 [2024-10-28 05:11:22.740667] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.166 [2024-10-28 05:11:22.750103] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.166 [2024-10-28 05:11:22.750494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.166 [2024-10-28 05:11:22.750525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.166 [2024-10-28 05:11:22.750542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.166 [2024-10-28 05:11:22.750791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.166 [2024-10-28 05:11:22.751034] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.166 [2024-10-28 05:11:22.751057] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.166 [2024-10-28 05:11:22.751072] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.166 [2024-10-28 05:11:22.754623] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.424 [2024-10-28 05:11:22.764067] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.425 [2024-10-28 05:11:22.764488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.425 [2024-10-28 05:11:22.764519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.425 [2024-10-28 05:11:22.764536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.425 [2024-10-28 05:11:22.764787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.425 [2024-10-28 05:11:22.765036] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.425 [2024-10-28 05:11:22.765060] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.425 [2024-10-28 05:11:22.765075] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.425 [2024-10-28 05:11:22.768628] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.425 [2024-10-28 05:11:22.777860] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.425 [2024-10-28 05:11:22.778259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.425 [2024-10-28 05:11:22.778291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.425 [2024-10-28 05:11:22.778309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.425 [2024-10-28 05:11:22.778547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.425 [2024-10-28 05:11:22.778801] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.425 [2024-10-28 05:11:22.778826] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.425 [2024-10-28 05:11:22.778841] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.425 [2024-10-28 05:11:22.782393] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.425 [2024-10-28 05:11:22.791843] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.425 [2024-10-28 05:11:22.792253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.425 [2024-10-28 05:11:22.792283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.425 [2024-10-28 05:11:22.792301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.425 [2024-10-28 05:11:22.792538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.425 [2024-10-28 05:11:22.792791] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.425 [2024-10-28 05:11:22.792816] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.425 [2024-10-28 05:11:22.792831] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.425 [2024-10-28 05:11:22.796383] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.425 [2024-10-28 05:11:22.805828] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.425 [2024-10-28 05:11:22.806249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.425 [2024-10-28 05:11:22.806280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.425 [2024-10-28 05:11:22.806298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.425 [2024-10-28 05:11:22.806535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.425 [2024-10-28 05:11:22.806789] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.425 [2024-10-28 05:11:22.806813] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.425 [2024-10-28 05:11:22.806834] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.425 [2024-10-28 05:11:22.810393] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.425 [2024-10-28 05:11:22.819626] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.425 [2024-10-28 05:11:22.820033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.425 [2024-10-28 05:11:22.820065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.425 [2024-10-28 05:11:22.820083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.425 [2024-10-28 05:11:22.820321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.425 [2024-10-28 05:11:22.820563] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.425 [2024-10-28 05:11:22.820586] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.425 [2024-10-28 05:11:22.820601] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.425 [2024-10-28 05:11:22.824165] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.425 [2024-10-28 05:11:22.833609] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.425 [2024-10-28 05:11:22.834039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.425 [2024-10-28 05:11:22.834071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.425 [2024-10-28 05:11:22.834089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.425 [2024-10-28 05:11:22.834326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.425 [2024-10-28 05:11:22.834569] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.425 [2024-10-28 05:11:22.834592] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.425 [2024-10-28 05:11:22.834607] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.425 [2024-10-28 05:11:22.838167] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.425 [2024-10-28 05:11:22.847599] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.425 [2024-10-28 05:11:22.848005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.425 [2024-10-28 05:11:22.848036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.425 [2024-10-28 05:11:22.848054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.425 [2024-10-28 05:11:22.848291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.425 [2024-10-28 05:11:22.848533] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.425 [2024-10-28 05:11:22.848557] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.425 [2024-10-28 05:11:22.848572] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.425 [2024-10-28 05:11:22.852135] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.425 [2024-10-28 05:11:22.861575] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.425 [2024-10-28 05:11:22.861969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.425 [2024-10-28 05:11:22.862000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.425 [2024-10-28 05:11:22.862018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.425 [2024-10-28 05:11:22.862255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.425 [2024-10-28 05:11:22.862497] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.425 [2024-10-28 05:11:22.862520] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.425 [2024-10-28 05:11:22.862536] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.425 [2024-10-28 05:11:22.866098] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.425 [2024-10-28 05:11:22.875533] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.425 [2024-10-28 05:11:22.875943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.425 [2024-10-28 05:11:22.875970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.425 [2024-10-28 05:11:22.875986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.425 [2024-10-28 05:11:22.876225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.425 [2024-10-28 05:11:22.876466] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.425 [2024-10-28 05:11:22.876489] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.425 [2024-10-28 05:11:22.876504] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.425 [2024-10-28 05:11:22.880067] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.425 [2024-10-28 05:11:22.889505] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.425 [2024-10-28 05:11:22.889920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.425 [2024-10-28 05:11:22.889951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.425 [2024-10-28 05:11:22.889969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.425 [2024-10-28 05:11:22.890206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.425 [2024-10-28 05:11:22.890448] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.425 [2024-10-28 05:11:22.890472] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.425 [2024-10-28 05:11:22.890486] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.425 [2024-10-28 05:11:22.894047] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.425 [2024-10-28 05:11:22.903506] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.425 [2024-10-28 05:11:22.903928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.425 [2024-10-28 05:11:22.903960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.425 [2024-10-28 05:11:22.903984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.425 [2024-10-28 05:11:22.904222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.425 [2024-10-28 05:11:22.904464] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.425 [2024-10-28 05:11:22.904489] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.425 [2024-10-28 05:11:22.904504] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.425 [2024-10-28 05:11:22.908136] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.425 [2024-10-28 05:11:22.917460] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.425 [2024-10-28 05:11:22.917885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.426 [2024-10-28 05:11:22.917914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.426 [2024-10-28 05:11:22.917930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.426 [2024-10-28 05:11:22.918178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.426 [2024-10-28 05:11:22.918421] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.426 [2024-10-28 05:11:22.918444] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.426 [2024-10-28 05:11:22.918459] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.426 [2024-10-28 05:11:22.922021] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.426 [2024-10-28 05:11:22.931269] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.426 [2024-10-28 05:11:22.931664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.426 [2024-10-28 05:11:22.931696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.426 [2024-10-28 05:11:22.931715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.426 [2024-10-28 05:11:22.931952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.426 [2024-10-28 05:11:22.932194] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.426 [2024-10-28 05:11:22.932217] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.426 [2024-10-28 05:11:22.932232] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.426 [2024-10-28 05:11:22.935797] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.426 [2024-10-28 05:11:22.945228] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.426 [2024-10-28 05:11:22.945619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.426 [2024-10-28 05:11:22.945659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.426 [2024-10-28 05:11:22.945677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.426 [2024-10-28 05:11:22.945915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.426 [2024-10-28 05:11:22.946164] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.426 [2024-10-28 05:11:22.946188] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.426 [2024-10-28 05:11:22.946203] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.426 [2024-10-28 05:11:22.949762] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.426 [2024-10-28 05:11:22.959196] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.426 [2024-10-28 05:11:22.959628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.426 [2024-10-28 05:11:22.959666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.426 [2024-10-28 05:11:22.959684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.426 [2024-10-28 05:11:22.959920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.426 [2024-10-28 05:11:22.960163] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.426 [2024-10-28 05:11:22.960186] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.426 [2024-10-28 05:11:22.960201] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.426 [2024-10-28 05:11:22.963761] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.426 [2024-10-28 05:11:22.972987] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.426 [2024-10-28 05:11:22.973414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.426 [2024-10-28 05:11:22.973458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.426 [2024-10-28 05:11:22.973474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.426 [2024-10-28 05:11:22.973750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.426 [2024-10-28 05:11:22.973993] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.426 [2024-10-28 05:11:22.974016] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.426 [2024-10-28 05:11:22.974031] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.426 [2024-10-28 05:11:22.977581] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.426 [2024-10-28 05:11:22.986808] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.426 [2024-10-28 05:11:22.987193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.426 [2024-10-28 05:11:22.987224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.426 [2024-10-28 05:11:22.987242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.426 [2024-10-28 05:11:22.987478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.426 [2024-10-28 05:11:22.987732] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.426 [2024-10-28 05:11:22.987756] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.426 [2024-10-28 05:11:22.987777] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.426 [2024-10-28 05:11:22.991331] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.426 [2024-10-28 05:11:23.000775] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.426 [2024-10-28 05:11:23.001192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.426 [2024-10-28 05:11:23.001223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.426 [2024-10-28 05:11:23.001241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.426 [2024-10-28 05:11:23.001478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.426 [2024-10-28 05:11:23.001732] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.426 [2024-10-28 05:11:23.001756] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.426 [2024-10-28 05:11:23.001772] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.426 [2024-10-28 05:11:23.005322] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.426 [2024-10-28 05:11:23.014762] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.426 [2024-10-28 05:11:23.015176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.426 [2024-10-28 05:11:23.015207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.426 [2024-10-28 05:11:23.015224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.426 [2024-10-28 05:11:23.015461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.426 [2024-10-28 05:11:23.015716] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.426 [2024-10-28 05:11:23.015740] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.426 [2024-10-28 05:11:23.015756] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.686 [2024-10-28 05:11:23.019310] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.686 [2024-10-28 05:11:23.028770] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.686 [2024-10-28 05:11:23.029172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.686 [2024-10-28 05:11:23.029201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.686 [2024-10-28 05:11:23.029216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.686 [2024-10-28 05:11:23.029451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.686 [2024-10-28 05:11:23.029713] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.686 [2024-10-28 05:11:23.029738] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.686 [2024-10-28 05:11:23.029754] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.686 [2024-10-28 05:11:23.033308] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.686 [2024-10-28 05:11:23.042706] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.686 [2024-10-28 05:11:23.043133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.686 [2024-10-28 05:11:23.043165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.686 [2024-10-28 05:11:23.043183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.686 [2024-10-28 05:11:23.043420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.686 [2024-10-28 05:11:23.043672] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.686 [2024-10-28 05:11:23.043710] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.686 [2024-10-28 05:11:23.043724] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.686 [2024-10-28 05:11:23.047300] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.686 [2024-10-28 05:11:23.056584] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.686 [2024-10-28 05:11:23.057015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.686 [2024-10-28 05:11:23.057046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.686 [2024-10-28 05:11:23.057064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.686 [2024-10-28 05:11:23.057301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.686 [2024-10-28 05:11:23.057543] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.686 [2024-10-28 05:11:23.057566] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.686 [2024-10-28 05:11:23.057582] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.686 [2024-10-28 05:11:23.061111] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.686 [2024-10-28 05:11:23.070491] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.686 [2024-10-28 05:11:23.070929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.686 [2024-10-28 05:11:23.070957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.686 [2024-10-28 05:11:23.070973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.686 [2024-10-28 05:11:23.071214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.686 [2024-10-28 05:11:23.071456] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.686 [2024-10-28 05:11:23.071479] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.686 [2024-10-28 05:11:23.071495] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.686 [2024-10-28 05:11:23.075057] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.686 [2024-10-28 05:11:23.084293] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.686 [2024-10-28 05:11:23.084704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.686 [2024-10-28 05:11:23.084734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.686 [2024-10-28 05:11:23.084756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.686 [2024-10-28 05:11:23.085006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.686 [2024-10-28 05:11:23.085249] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.686 [2024-10-28 05:11:23.085273] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.686 [2024-10-28 05:11:23.085288] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.686 [2024-10-28 05:11:23.088853] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.686 [2024-10-28 05:11:23.098291] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.686 [2024-10-28 05:11:23.098717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.686 [2024-10-28 05:11:23.098749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.686 [2024-10-28 05:11:23.098767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.686 [2024-10-28 05:11:23.099004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.686 [2024-10-28 05:11:23.099246] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.686 [2024-10-28 05:11:23.099270] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.686 [2024-10-28 05:11:23.099284] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.686 [2024-10-28 05:11:23.102850] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.686 [2024-10-28 05:11:23.112093] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.686 [2024-10-28 05:11:23.112568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.686 [2024-10-28 05:11:23.112596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.686 [2024-10-28 05:11:23.112612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.686 [2024-10-28 05:11:23.112862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.686 [2024-10-28 05:11:23.113105] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.686 [2024-10-28 05:11:23.113129] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.686 [2024-10-28 05:11:23.113144] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.686 [2024-10-28 05:11:23.116731] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.686 [2024-10-28 05:11:23.125976] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.686 [2024-10-28 05:11:23.126405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.686 [2024-10-28 05:11:23.126432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.686 [2024-10-28 05:11:23.126448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.686 [2024-10-28 05:11:23.126714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.686 [2024-10-28 05:11:23.126963] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.686 [2024-10-28 05:11:23.126987] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.686 [2024-10-28 05:11:23.127002] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.686 [2024-10-28 05:11:23.130553] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.686 [2024-10-28 05:11:23.139948] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.686 [2024-10-28 05:11:23.140305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.686 [2024-10-28 05:11:23.140333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.686 [2024-10-28 05:11:23.140350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.686 [2024-10-28 05:11:23.140598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.686 [2024-10-28 05:11:23.140849] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.686 [2024-10-28 05:11:23.140871] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.686 [2024-10-28 05:11:23.140884] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.686 [2024-10-28 05:11:23.144475] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.686 [2024-10-28 05:11:23.153813] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.686 [2024-10-28 05:11:23.154251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.687 [2024-10-28 05:11:23.154281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.687 [2024-10-28 05:11:23.154297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.687 [2024-10-28 05:11:23.154535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.687 [2024-10-28 05:11:23.154799] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.687 [2024-10-28 05:11:23.154823] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.687 [2024-10-28 05:11:23.154837] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.687 [2024-10-28 05:11:23.158505] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.687 [2024-10-28 05:11:23.167736] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.687 [2024-10-28 05:11:23.168139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.687 [2024-10-28 05:11:23.168170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.687 [2024-10-28 05:11:23.168188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.687 [2024-10-28 05:11:23.168425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.687 [2024-10-28 05:11:23.168693] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.687 [2024-10-28 05:11:23.168715] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.687 [2024-10-28 05:11:23.168734] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.687 [2024-10-28 05:11:23.172293] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.687 [2024-10-28 05:11:23.181640] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.687 [2024-10-28 05:11:23.181986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.687 [2024-10-28 05:11:23.182014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.687 [2024-10-28 05:11:23.182030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.687 [2024-10-28 05:11:23.182272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.687 [2024-10-28 05:11:23.182523] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.687 [2024-10-28 05:11:23.182547] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.687 [2024-10-28 05:11:23.182562] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.687 [2024-10-28 05:11:23.186182] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.687 [2024-10-28 05:11:23.195393] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.687 [2024-10-28 05:11:23.195802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.687 [2024-10-28 05:11:23.195832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.687 [2024-10-28 05:11:23.195848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.687 [2024-10-28 05:11:23.196094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.687 [2024-10-28 05:11:23.196337] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.687 [2024-10-28 05:11:23.196360] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.687 [2024-10-28 05:11:23.196375] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.687 [2024-10-28 05:11:23.199965] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.687 [2024-10-28 05:11:23.209304] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.687 [2024-10-28 05:11:23.209721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.687 [2024-10-28 05:11:23.209750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.687 [2024-10-28 05:11:23.209766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.687 [2024-10-28 05:11:23.209996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.687 [2024-10-28 05:11:23.210253] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.687 [2024-10-28 05:11:23.210277] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.687 [2024-10-28 05:11:23.210292] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.687 [2024-10-28 05:11:23.213880] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.687 [2024-10-28 05:11:23.223165] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.687 [2024-10-28 05:11:23.223556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.687 [2024-10-28 05:11:23.223587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.687 [2024-10-28 05:11:23.223605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.687 [2024-10-28 05:11:23.223851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.687 [2024-10-28 05:11:23.224095] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.687 [2024-10-28 05:11:23.224118] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.687 [2024-10-28 05:11:23.224133] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.687 [2024-10-28 05:11:23.227709] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.687 [2024-10-28 05:11:23.237166] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.687 [2024-10-28 05:11:23.237556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.687 [2024-10-28 05:11:23.237588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.687 [2024-10-28 05:11:23.237605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.687 [2024-10-28 05:11:23.237852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.687 [2024-10-28 05:11:23.238095] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.687 [2024-10-28 05:11:23.238118] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.687 [2024-10-28 05:11:23.238134] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.687 [2024-10-28 05:11:23.241692] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.687 [2024-10-28 05:11:23.251128] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.687 [2024-10-28 05:11:23.251538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.687 [2024-10-28 05:11:23.251570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.687 [2024-10-28 05:11:23.251588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.687 [2024-10-28 05:11:23.251837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.687 [2024-10-28 05:11:23.252079] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.687 [2024-10-28 05:11:23.252103] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.687 [2024-10-28 05:11:23.252118] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.687 [2024-10-28 05:11:23.255675] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.687 [2024-10-28 05:11:23.265110] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.687 [2024-10-28 05:11:23.265523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.687 [2024-10-28 05:11:23.265550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.687 [2024-10-28 05:11:23.265571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.687 [2024-10-28 05:11:23.265834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.687 [2024-10-28 05:11:23.266078] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.687 [2024-10-28 05:11:23.266101] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.687 [2024-10-28 05:11:23.266116] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.687 [2024-10-28 05:11:23.269674] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.687 [2024-10-28 05:11:23.279030] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.946 [2024-10-28 05:11:23.279460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.946 [2024-10-28 05:11:23.279491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.946 [2024-10-28 05:11:23.279509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.946 [2024-10-28 05:11:23.279755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.946 [2024-10-28 05:11:23.279998] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.946 [2024-10-28 05:11:23.280021] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.946 [2024-10-28 05:11:23.280036] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.946 [2024-10-28 05:11:23.283588] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.946 [2024-10-28 05:11:23.292818] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.946 [2024-10-28 05:11:23.293202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.946 [2024-10-28 05:11:23.293233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.946 [2024-10-28 05:11:23.293252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.946 [2024-10-28 05:11:23.293489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.946 [2024-10-28 05:11:23.293741] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.946 [2024-10-28 05:11:23.293766] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.946 [2024-10-28 05:11:23.293781] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.946 [2024-10-28 05:11:23.297331] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.946 [2024-10-28 05:11:23.306768] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.946 [2024-10-28 05:11:23.307190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.946 [2024-10-28 05:11:23.307220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.946 [2024-10-28 05:11:23.307238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.946 [2024-10-28 05:11:23.307475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.946 [2024-10-28 05:11:23.307733] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.946 [2024-10-28 05:11:23.307758] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.946 [2024-10-28 05:11:23.307774] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.946 [2024-10-28 05:11:23.311329] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.946 [2024-10-28 05:11:23.320550] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.946 [2024-10-28 05:11:23.320981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.946 [2024-10-28 05:11:23.321008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.946 [2024-10-28 05:11:23.321024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.946 [2024-10-28 05:11:23.321257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.946 [2024-10-28 05:11:23.321510] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.946 [2024-10-28 05:11:23.321534] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.946 [2024-10-28 05:11:23.321549] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.946 [2024-10-28 05:11:23.325108] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.946 [2024-10-28 05:11:23.334347] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.946 [2024-10-28 05:11:23.334756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.946 [2024-10-28 05:11:23.334788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.946 [2024-10-28 05:11:23.334806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.946 [2024-10-28 05:11:23.335043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.946 [2024-10-28 05:11:23.335285] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.946 [2024-10-28 05:11:23.335309] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.946 [2024-10-28 05:11:23.335324] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.946 [2024-10-28 05:11:23.338887] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.946 [2024-10-28 05:11:23.348317] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.946 [2024-10-28 05:11:23.348716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.946 [2024-10-28 05:11:23.348748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.946 [2024-10-28 05:11:23.348766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.946 [2024-10-28 05:11:23.349002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.946 [2024-10-28 05:11:23.349245] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.946 [2024-10-28 05:11:23.349268] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.946 [2024-10-28 05:11:23.349283] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.946 [2024-10-28 05:11:23.352851] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.946 [2024-10-28 05:11:23.362286] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.946 [2024-10-28 05:11:23.362689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.946 [2024-10-28 05:11:23.362729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.946 [2024-10-28 05:11:23.362753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.946 [2024-10-28 05:11:23.362993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.947 [2024-10-28 05:11:23.363235] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.947 [2024-10-28 05:11:23.363259] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.947 [2024-10-28 05:11:23.363274] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.947 [2024-10-28 05:11:23.366835] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.947 [2024-10-28 05:11:23.376260] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.947 [2024-10-28 05:11:23.376651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.947 [2024-10-28 05:11:23.376682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.947 [2024-10-28 05:11:23.376700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.947 [2024-10-28 05:11:23.376937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.947 [2024-10-28 05:11:23.377179] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.947 [2024-10-28 05:11:23.377202] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.947 [2024-10-28 05:11:23.377217] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.947 [2024-10-28 05:11:23.380780] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.947 [2024-10-28 05:11:23.390211] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.947 [2024-10-28 05:11:23.390599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.947 [2024-10-28 05:11:23.390630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.947 [2024-10-28 05:11:23.390660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.947 [2024-10-28 05:11:23.390898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.947 [2024-10-28 05:11:23.391140] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.947 [2024-10-28 05:11:23.391163] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.947 [2024-10-28 05:11:23.391178] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.947 [2024-10-28 05:11:23.394734] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.947 [2024-10-28 05:11:23.404167] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.947 [2024-10-28 05:11:23.404591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.947 [2024-10-28 05:11:23.404622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.947 [2024-10-28 05:11:23.404651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.947 [2024-10-28 05:11:23.404890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.947 [2024-10-28 05:11:23.405132] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.947 [2024-10-28 05:11:23.405155] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.947 [2024-10-28 05:11:23.405170] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.947 [2024-10-28 05:11:23.408804] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.947 [2024-10-28 05:11:23.418165] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.947 [2024-10-28 05:11:23.418591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.947 [2024-10-28 05:11:23.418624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.947 [2024-10-28 05:11:23.418651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.947 [2024-10-28 05:11:23.418893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.947 [2024-10-28 05:11:23.419136] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.947 [2024-10-28 05:11:23.419160] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.947 [2024-10-28 05:11:23.419175] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.947 [2024-10-28 05:11:23.422733] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.947 [2024-10-28 05:11:23.432014] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.947 [2024-10-28 05:11:23.432432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.947 [2024-10-28 05:11:23.432464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.947 [2024-10-28 05:11:23.432483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.947 [2024-10-28 05:11:23.432730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.947 [2024-10-28 05:11:23.432975] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.947 [2024-10-28 05:11:23.432998] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.947 [2024-10-28 05:11:23.433013] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.947 [2024-10-28 05:11:23.436572] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.947 [2024-10-28 05:11:23.445825] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.947 [2024-10-28 05:11:23.446241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.947 [2024-10-28 05:11:23.446272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.947 [2024-10-28 05:11:23.446290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.947 [2024-10-28 05:11:23.446533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.947 [2024-10-28 05:11:23.446788] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.947 [2024-10-28 05:11:23.446812] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.947 [2024-10-28 05:11:23.446828] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.947 [2024-10-28 05:11:23.450380] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.947 [2024-10-28 05:11:23.459820] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.947 [2024-10-28 05:11:23.460254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.947 [2024-10-28 05:11:23.460286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.947 [2024-10-28 05:11:23.460304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.947 [2024-10-28 05:11:23.460541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.947 [2024-10-28 05:11:23.460795] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.947 [2024-10-28 05:11:23.460819] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.947 [2024-10-28 05:11:23.460835] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.947 [2024-10-28 05:11:23.464390] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.947 [2024-10-28 05:11:23.473615] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.947 [2024-10-28 05:11:23.474037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.947 [2024-10-28 05:11:23.474069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.947 [2024-10-28 05:11:23.474087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.947 [2024-10-28 05:11:23.474325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.947 [2024-10-28 05:11:23.474567] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.947 [2024-10-28 05:11:23.474590] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.947 [2024-10-28 05:11:23.474605] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.947 [2024-10-28 05:11:23.478167] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.947 [2024-10-28 05:11:23.487605] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.947 [2024-10-28 05:11:23.488015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.947 [2024-10-28 05:11:23.488043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.947 [2024-10-28 05:11:23.488059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.947 [2024-10-28 05:11:23.488297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.947 [2024-10-28 05:11:23.488540] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.947 [2024-10-28 05:11:23.488569] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.947 [2024-10-28 05:11:23.488585] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.947 [2024-10-28 05:11:23.492152] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.947 [2024-10-28 05:11:23.501589] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.947 [2024-10-28 05:11:23.502010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.947 [2024-10-28 05:11:23.502042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.947 [2024-10-28 05:11:23.502060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.947 [2024-10-28 05:11:23.502297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.947 [2024-10-28 05:11:23.502539] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.947 [2024-10-28 05:11:23.502562] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.947 [2024-10-28 05:11:23.502578] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.947 [2024-10-28 05:11:23.506139] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.948 [2024-10-28 05:11:23.515573] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.948 [2024-10-28 05:11:23.515996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.948 [2024-10-28 05:11:23.516027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.948 [2024-10-28 05:11:23.516045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.948 [2024-10-28 05:11:23.516282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.948 [2024-10-28 05:11:23.516524] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.948 [2024-10-28 05:11:23.516547] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.948 [2024-10-28 05:11:23.516563] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.948 [2024-10-28 05:11:23.520125] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.948 [2024-10-28 05:11:23.529578] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.948 [2024-10-28 05:11:23.530011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.948 [2024-10-28 05:11:23.530043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:32.948 [2024-10-28 05:11:23.530060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:32.948 [2024-10-28 05:11:23.530297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:32.948 [2024-10-28 05:11:23.530539] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.948 [2024-10-28 05:11:23.530562] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.948 [2024-10-28 05:11:23.530577] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.948 [2024-10-28 05:11:23.534145] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.206 [2024-10-28 05:11:23.543374] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.206 [2024-10-28 05:11:23.543771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.206 [2024-10-28 05:11:23.543803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.206 [2024-10-28 05:11:23.543821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.206 [2024-10-28 05:11:23.544058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.206 [2024-10-28 05:11:23.544300] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.206 [2024-10-28 05:11:23.544323] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.206 [2024-10-28 05:11:23.544338] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.206 [2024-10-28 05:11:23.547901] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.206 [2024-10-28 05:11:23.557346] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.206 [2024-10-28 05:11:23.557741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.206 [2024-10-28 05:11:23.557773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.206 [2024-10-28 05:11:23.557791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.206 [2024-10-28 05:11:23.558028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.207 [2024-10-28 05:11:23.558270] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.207 [2024-10-28 05:11:23.558294] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.207 [2024-10-28 05:11:23.558309] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.207 [2024-10-28 05:11:23.561871] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.207 [2024-10-28 05:11:23.571315] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.207 [2024-10-28 05:11:23.571701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.207 [2024-10-28 05:11:23.571733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.207 [2024-10-28 05:11:23.571750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.207 [2024-10-28 05:11:23.571988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.207 [2024-10-28 05:11:23.572230] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.207 [2024-10-28 05:11:23.572253] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.207 [2024-10-28 05:11:23.572268] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.207 [2024-10-28 05:11:23.575831] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.207 [2024-10-28 05:11:23.585265] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.207 [2024-10-28 05:11:23.585666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.207 [2024-10-28 05:11:23.585698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.207 [2024-10-28 05:11:23.585716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.207 [2024-10-28 05:11:23.585954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.207 [2024-10-28 05:11:23.586196] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.207 [2024-10-28 05:11:23.586219] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.207 [2024-10-28 05:11:23.586234] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.207 [2024-10-28 05:11:23.589801] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.207 [2024-10-28 05:11:23.599236] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.207 [2024-10-28 05:11:23.599650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.207 [2024-10-28 05:11:23.599681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.207 [2024-10-28 05:11:23.599699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.207 [2024-10-28 05:11:23.599937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.207 [2024-10-28 05:11:23.600179] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.207 [2024-10-28 05:11:23.600202] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.207 [2024-10-28 05:11:23.600217] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.207 [2024-10-28 05:11:23.603778] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.207 [2024-10-28 05:11:23.613218] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.207 [2024-10-28 05:11:23.613716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.207 [2024-10-28 05:11:23.613748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.207 [2024-10-28 05:11:23.613767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.207 [2024-10-28 05:11:23.614004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.207 [2024-10-28 05:11:23.614247] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.207 [2024-10-28 05:11:23.614270] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.207 [2024-10-28 05:11:23.614285] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.207 [2024-10-28 05:11:23.617852] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.207 [2024-10-28 05:11:23.627124] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.207 [2024-10-28 05:11:23.627517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.207 [2024-10-28 05:11:23.627548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.207 [2024-10-28 05:11:23.627565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.207 [2024-10-28 05:11:23.627820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.207 [2024-10-28 05:11:23.628063] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.207 [2024-10-28 05:11:23.628087] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.207 [2024-10-28 05:11:23.628102] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.207 [2024-10-28 05:11:23.631670] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.207 5256.25 IOPS, 20.53 MiB/s [2024-10-28T04:11:23.803Z] [2024-10-28 05:11:23.640939] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.207 [2024-10-28 05:11:23.641355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.207 [2024-10-28 05:11:23.641386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.207 [2024-10-28 05:11:23.641404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.207 [2024-10-28 05:11:23.641652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.207 [2024-10-28 05:11:23.641894] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.207 [2024-10-28 05:11:23.641918] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.207 [2024-10-28 05:11:23.641933] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.207 [2024-10-28 05:11:23.645489] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.207 [2024-10-28 05:11:23.654747] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.207 [2024-10-28 05:11:23.655147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.207 [2024-10-28 05:11:23.655179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.207 [2024-10-28 05:11:23.655197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.207 [2024-10-28 05:11:23.655435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.207 [2024-10-28 05:11:23.655689] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.207 [2024-10-28 05:11:23.655714] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.207 [2024-10-28 05:11:23.655729] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.207 [2024-10-28 05:11:23.659319] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.207 [2024-10-28 05:11:23.668652] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.207 [2024-10-28 05:11:23.669085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.207 [2024-10-28 05:11:23.669117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.207 [2024-10-28 05:11:23.669135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.207 [2024-10-28 05:11:23.669375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.207 [2024-10-28 05:11:23.669619] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.207 [2024-10-28 05:11:23.669661] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.207 [2024-10-28 05:11:23.669679] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.207 [2024-10-28 05:11:23.673259] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.207 [2024-10-28 05:11:23.682532] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.207 [2024-10-28 05:11:23.682943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.207 [2024-10-28 05:11:23.682976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.207 [2024-10-28 05:11:23.682994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.207 [2024-10-28 05:11:23.683232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.207 [2024-10-28 05:11:23.683474] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.207 [2024-10-28 05:11:23.683498] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.207 [2024-10-28 05:11:23.683514] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.207 [2024-10-28 05:11:23.687077] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.207 [2024-10-28 05:11:23.696518] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.207 [2024-10-28 05:11:23.696929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.207 [2024-10-28 05:11:23.696957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.207 [2024-10-28 05:11:23.696973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.207 [2024-10-28 05:11:23.697228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.207 [2024-10-28 05:11:23.697470] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.207 [2024-10-28 05:11:23.697494] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.207 [2024-10-28 05:11:23.697509] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.208 [2024-10-28 05:11:23.701076] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.208 [2024-10-28 05:11:23.710311] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.208 [2024-10-28 05:11:23.710699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.208 [2024-10-28 05:11:23.710731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.208 [2024-10-28 05:11:23.710749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.208 [2024-10-28 05:11:23.710987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.208 [2024-10-28 05:11:23.711229] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.208 [2024-10-28 05:11:23.711252] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.208 [2024-10-28 05:11:23.711267] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.208 [2024-10-28 05:11:23.714840] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.208 [2024-10-28 05:11:23.724277] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.208 [2024-10-28 05:11:23.724675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.208 [2024-10-28 05:11:23.724703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.208 [2024-10-28 05:11:23.724718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.208 [2024-10-28 05:11:23.724958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.208 [2024-10-28 05:11:23.725200] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.208 [2024-10-28 05:11:23.725223] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.208 [2024-10-28 05:11:23.725238] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.208 [2024-10-28 05:11:23.728817] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.208 [2024-10-28 05:11:23.738265] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.208 [2024-10-28 05:11:23.738664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.208 [2024-10-28 05:11:23.738695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.208 [2024-10-28 05:11:23.738713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.208 [2024-10-28 05:11:23.738949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.208 [2024-10-28 05:11:23.739192] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.208 [2024-10-28 05:11:23.739215] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.208 [2024-10-28 05:11:23.739229] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.208 [2024-10-28 05:11:23.742793] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.208 [2024-10-28 05:11:23.752227] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.208 [2024-10-28 05:11:23.752639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.208 [2024-10-28 05:11:23.752672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.208 [2024-10-28 05:11:23.752691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.208 [2024-10-28 05:11:23.752928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.208 [2024-10-28 05:11:23.753171] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.208 [2024-10-28 05:11:23.753195] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.208 [2024-10-28 05:11:23.753210] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.208 [2024-10-28 05:11:23.756774] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.208 [2024-10-28 05:11:23.766214] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.208 [2024-10-28 05:11:23.766605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.208 [2024-10-28 05:11:23.766644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.208 [2024-10-28 05:11:23.766664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.208 [2024-10-28 05:11:23.766903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.208 [2024-10-28 05:11:23.767145] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.208 [2024-10-28 05:11:23.767168] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.208 [2024-10-28 05:11:23.767183] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.208 [2024-10-28 05:11:23.770744] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.208 [2024-10-28 05:11:23.780185] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.208 [2024-10-28 05:11:23.780608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.208 [2024-10-28 05:11:23.780642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.208 [2024-10-28 05:11:23.780675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.208 [2024-10-28 05:11:23.780937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.208 [2024-10-28 05:11:23.781180] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.208 [2024-10-28 05:11:23.781203] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.208 [2024-10-28 05:11:23.781218] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.208 [2024-10-28 05:11:23.784783] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.208 [2024-10-28 05:11:23.794015] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.208 [2024-10-28 05:11:23.794432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.208 [2024-10-28 05:11:23.794463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.208 [2024-10-28 05:11:23.794481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.208 [2024-10-28 05:11:23.794730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.208 [2024-10-28 05:11:23.794972] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.208 [2024-10-28 05:11:23.794995] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.208 [2024-10-28 05:11:23.795010] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.208 [2024-10-28 05:11:23.798560] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.467 [2024-10-28 05:11:23.808004] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.467 [2024-10-28 05:11:23.808429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-10-28 05:11:23.808457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.467 [2024-10-28 05:11:23.808473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.467 [2024-10-28 05:11:23.808738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.467 [2024-10-28 05:11:23.808982] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.467 [2024-10-28 05:11:23.809005] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.467 [2024-10-28 05:11:23.809020] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.467 [2024-10-28 05:11:23.812578] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.468 [2024-10-28 05:11:23.821814] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.468 [2024-10-28 05:11:23.822226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.468 [2024-10-28 05:11:23.822257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.468 [2024-10-28 05:11:23.822275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.468 [2024-10-28 05:11:23.822512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.468 [2024-10-28 05:11:23.822766] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.468 [2024-10-28 05:11:23.822791] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.468 [2024-10-28 05:11:23.822806] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.468 [2024-10-28 05:11:23.826363] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.468 [2024-10-28 05:11:23.835612] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.468 [2024-10-28 05:11:23.836034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.468 [2024-10-28 05:11:23.836065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.468 [2024-10-28 05:11:23.836083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.468 [2024-10-28 05:11:23.836320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.468 [2024-10-28 05:11:23.836561] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.468 [2024-10-28 05:11:23.836585] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.468 [2024-10-28 05:11:23.836600] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.468 [2024-10-28 05:11:23.840159] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.468 [2024-10-28 05:11:23.849597] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.468 [2024-10-28 05:11:23.849991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.468 [2024-10-28 05:11:23.850023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.468 [2024-10-28 05:11:23.850041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.468 [2024-10-28 05:11:23.850279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.468 [2024-10-28 05:11:23.850520] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.468 [2024-10-28 05:11:23.850549] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.468 [2024-10-28 05:11:23.850565] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.468 [2024-10-28 05:11:23.854127] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.468 [2024-10-28 05:11:23.863574] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.468 [2024-10-28 05:11:23.863984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.468 [2024-10-28 05:11:23.864016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.468 [2024-10-28 05:11:23.864035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.468 [2024-10-28 05:11:23.864272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.468 [2024-10-28 05:11:23.864513] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.468 [2024-10-28 05:11:23.864537] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.468 [2024-10-28 05:11:23.864552] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.468 [2024-10-28 05:11:23.868112] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.468 [2024-10-28 05:11:23.877546] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.468 [2024-10-28 05:11:23.877956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.468 [2024-10-28 05:11:23.877988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.468 [2024-10-28 05:11:23.878006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.468 [2024-10-28 05:11:23.878243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.468 [2024-10-28 05:11:23.878485] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.468 [2024-10-28 05:11:23.878509] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.468 [2024-10-28 05:11:23.878524] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.468 [2024-10-28 05:11:23.882086] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.468 [2024-10-28 05:11:23.891522] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.468 [2024-10-28 05:11:23.891916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.468 [2024-10-28 05:11:23.891947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.468 [2024-10-28 05:11:23.891964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.468 [2024-10-28 05:11:23.892201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.468 [2024-10-28 05:11:23.892443] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.468 [2024-10-28 05:11:23.892467] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.468 [2024-10-28 05:11:23.892482] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.468 [2024-10-28 05:11:23.896052] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.468 [2024-10-28 05:11:23.905490] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.468 [2024-10-28 05:11:23.905909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.468 [2024-10-28 05:11:23.905941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.468 [2024-10-28 05:11:23.905958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.468 [2024-10-28 05:11:23.906195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.468 [2024-10-28 05:11:23.906438] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.468 [2024-10-28 05:11:23.906462] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.468 [2024-10-28 05:11:23.906478] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.468 [2024-10-28 05:11:23.910073] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.468 [2024-10-28 05:11:23.919446] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.468 [2024-10-28 05:11:23.919863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.468 [2024-10-28 05:11:23.919893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.468 [2024-10-28 05:11:23.919911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.468 [2024-10-28 05:11:23.920176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.468 [2024-10-28 05:11:23.920419] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.468 [2024-10-28 05:11:23.920444] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.468 [2024-10-28 05:11:23.920465] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.468 [2024-10-28 05:11:23.924069] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.468 [2024-10-28 05:11:23.933327] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.468 [2024-10-28 05:11:23.933745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.468 [2024-10-28 05:11:23.933777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.468 [2024-10-28 05:11:23.933796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.468 [2024-10-28 05:11:23.934034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.468 [2024-10-28 05:11:23.934277] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.468 [2024-10-28 05:11:23.934300] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.468 [2024-10-28 05:11:23.934316] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.468 [2024-10-28 05:11:23.937880] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.468 [2024-10-28 05:11:23.947321] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.468 [2024-10-28 05:11:23.947735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.468 [2024-10-28 05:11:23.947767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.468 [2024-10-28 05:11:23.947785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.468 [2024-10-28 05:11:23.948023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.468 [2024-10-28 05:11:23.948265] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.469 [2024-10-28 05:11:23.948288] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.469 [2024-10-28 05:11:23.948304] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.469 [2024-10-28 05:11:23.951867] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.469 [2024-10-28 05:11:23.961313] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.469 [2024-10-28 05:11:23.961743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.469 [2024-10-28 05:11:23.961786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.469 [2024-10-28 05:11:23.961801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.469 [2024-10-28 05:11:23.962071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.469 [2024-10-28 05:11:23.962313] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.469 [2024-10-28 05:11:23.962336] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.469 [2024-10-28 05:11:23.962351] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.469 [2024-10-28 05:11:23.965920] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.469 [2024-10-28 05:11:23.975155] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.469 [2024-10-28 05:11:23.975591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.469 [2024-10-28 05:11:23.975619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.469 [2024-10-28 05:11:23.975659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.469 [2024-10-28 05:11:23.975906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.469 [2024-10-28 05:11:23.976148] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.469 [2024-10-28 05:11:23.976171] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.469 [2024-10-28 05:11:23.976186] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.469 [2024-10-28 05:11:23.979740] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.469 [2024-10-28 05:11:23.988969] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.469 [2024-10-28 05:11:23.989360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.469 [2024-10-28 05:11:23.989391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.469 [2024-10-28 05:11:23.989409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.469 [2024-10-28 05:11:23.989664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.469 [2024-10-28 05:11:23.989906] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.469 [2024-10-28 05:11:23.989930] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.469 [2024-10-28 05:11:23.989944] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.469 [2024-10-28 05:11:23.993490] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.469 [2024-10-28 05:11:24.002938] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.469 [2024-10-28 05:11:24.003326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.469 [2024-10-28 05:11:24.003357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.469 [2024-10-28 05:11:24.003374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.469 [2024-10-28 05:11:24.003611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.469 [2024-10-28 05:11:24.003864] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.469 [2024-10-28 05:11:24.003888] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.469 [2024-10-28 05:11:24.003903] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.469 [2024-10-28 05:11:24.007458] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.469 [2024-10-28 05:11:24.016905] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.469 [2024-10-28 05:11:24.017295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.469 [2024-10-28 05:11:24.017326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.469 [2024-10-28 05:11:24.017344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.469 [2024-10-28 05:11:24.017581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.469 [2024-10-28 05:11:24.017833] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.469 [2024-10-28 05:11:24.017858] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.469 [2024-10-28 05:11:24.017873] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.469 [2024-10-28 05:11:24.021424] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.469 [2024-10-28 05:11:24.030894] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.469 [2024-10-28 05:11:24.031316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.469 [2024-10-28 05:11:24.031348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.469 [2024-10-28 05:11:24.031365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.469 [2024-10-28 05:11:24.031602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.469 [2024-10-28 05:11:24.031855] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.469 [2024-10-28 05:11:24.031885] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.469 [2024-10-28 05:11:24.031902] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.469 [2024-10-28 05:11:24.035458] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.469 [2024-10-28 05:11:24.044709] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.469 [2024-10-28 05:11:24.045129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.469 [2024-10-28 05:11:24.045160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.469 [2024-10-28 05:11:24.045178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.469 [2024-10-28 05:11:24.045415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.469 [2024-10-28 05:11:24.045669] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.469 [2024-10-28 05:11:24.045693] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.469 [2024-10-28 05:11:24.045709] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.469 [2024-10-28 05:11:24.049265] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.469 [2024-10-28 05:11:24.058511] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.469 [2024-10-28 05:11:24.058901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.469 [2024-10-28 05:11:24.058933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.469 [2024-10-28 05:11:24.058951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.469 [2024-10-28 05:11:24.059189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.469 [2024-10-28 05:11:24.059431] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.469 [2024-10-28 05:11:24.059454] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.469 [2024-10-28 05:11:24.059469] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.729 [2024-10-28 05:11:24.063035] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.729 [2024-10-28 05:11:24.072311] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.729 [2024-10-28 05:11:24.072714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.729 [2024-10-28 05:11:24.072743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.729 [2024-10-28 05:11:24.072759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.729 [2024-10-28 05:11:24.072999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.729 [2024-10-28 05:11:24.073242] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.729 [2024-10-28 05:11:24.073266] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.729 [2024-10-28 05:11:24.073281] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.729 [2024-10-28 05:11:24.076860] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.729 [2024-10-28 05:11:24.086113] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.729 [2024-10-28 05:11:24.086533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.729 [2024-10-28 05:11:24.086566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.729 [2024-10-28 05:11:24.086584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.729 [2024-10-28 05:11:24.086835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.729 [2024-10-28 05:11:24.087078] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.729 [2024-10-28 05:11:24.087102] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.729 [2024-10-28 05:11:24.087117] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.729 [2024-10-28 05:11:24.090681] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.729 [2024-10-28 05:11:24.099921] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.729 [2024-10-28 05:11:24.100344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.729 [2024-10-28 05:11:24.100375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.729 [2024-10-28 05:11:24.100392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.729 [2024-10-28 05:11:24.100629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.729 [2024-10-28 05:11:24.100884] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.729 [2024-10-28 05:11:24.100907] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.729 [2024-10-28 05:11:24.100923] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.729 [2024-10-28 05:11:24.104479] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.729 [2024-10-28 05:11:24.113728] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.729 [2024-10-28 05:11:24.114138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.729 [2024-10-28 05:11:24.114170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.729 [2024-10-28 05:11:24.114188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.729 [2024-10-28 05:11:24.114425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.729 [2024-10-28 05:11:24.114680] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.729 [2024-10-28 05:11:24.114704] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.729 [2024-10-28 05:11:24.114719] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.729 [2024-10-28 05:11:24.118275] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.729 [2024-10-28 05:11:24.127745] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.729 [2024-10-28 05:11:24.128169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.729 [2024-10-28 05:11:24.128217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.729 [2024-10-28 05:11:24.128234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.729 [2024-10-28 05:11:24.128501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.729 [2024-10-28 05:11:24.128755] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.729 [2024-10-28 05:11:24.128779] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.729 [2024-10-28 05:11:24.128794] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.729 [2024-10-28 05:11:24.132353] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.729 [2024-10-28 05:11:24.141598] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.729 [2024-10-28 05:11:24.141973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.729 [2024-10-28 05:11:24.142005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.729 [2024-10-28 05:11:24.142023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.729 [2024-10-28 05:11:24.142260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.729 [2024-10-28 05:11:24.142502] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.729 [2024-10-28 05:11:24.142525] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.729 [2024-10-28 05:11:24.142540] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.729 [2024-10-28 05:11:24.146107] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.729 [2024-10-28 05:11:24.155552] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.729 [2024-10-28 05:11:24.155971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.729 [2024-10-28 05:11:24.156003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.729 [2024-10-28 05:11:24.156020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.729 [2024-10-28 05:11:24.156258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.729 [2024-10-28 05:11:24.156502] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.729 [2024-10-28 05:11:24.156526] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.729 [2024-10-28 05:11:24.156541] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.729 [2024-10-28 05:11:24.160135] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.729 [2024-10-28 05:11:24.169486] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.729 [2024-10-28 05:11:24.169907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.730 [2024-10-28 05:11:24.169939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.730 [2024-10-28 05:11:24.169957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.730 [2024-10-28 05:11:24.170203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.730 [2024-10-28 05:11:24.170453] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.730 [2024-10-28 05:11:24.170478] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.730 [2024-10-28 05:11:24.170494] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.730 [2024-10-28 05:11:24.174095] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.730 [2024-10-28 05:11:24.183349] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.730 [2024-10-28 05:11:24.183766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.730 [2024-10-28 05:11:24.183798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.730 [2024-10-28 05:11:24.183816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.730 [2024-10-28 05:11:24.184053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.730 [2024-10-28 05:11:24.184295] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.730 [2024-10-28 05:11:24.184318] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.730 [2024-10-28 05:11:24.184334] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.730 [2024-10-28 05:11:24.187901] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.730 [2024-10-28 05:11:24.197351] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.730 [2024-10-28 05:11:24.197776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.730 [2024-10-28 05:11:24.197808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.730 [2024-10-28 05:11:24.197826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.730 [2024-10-28 05:11:24.198062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.730 [2024-10-28 05:11:24.198305] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.730 [2024-10-28 05:11:24.198328] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.730 [2024-10-28 05:11:24.198343] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.730 [2024-10-28 05:11:24.201918] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.730 [2024-10-28 05:11:24.211158] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.730 [2024-10-28 05:11:24.211598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.730 [2024-10-28 05:11:24.211663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.730 [2024-10-28 05:11:24.211684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.730 [2024-10-28 05:11:24.211920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.730 [2024-10-28 05:11:24.212162] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.730 [2024-10-28 05:11:24.212191] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.730 [2024-10-28 05:11:24.212207] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.730 [2024-10-28 05:11:24.215777] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.730 [2024-10-28 05:11:24.225021] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.730 [2024-10-28 05:11:24.225431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.730 [2024-10-28 05:11:24.225463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.730 [2024-10-28 05:11:24.225481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.730 [2024-10-28 05:11:24.225731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.730 [2024-10-28 05:11:24.225974] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.730 [2024-10-28 05:11:24.225998] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.730 [2024-10-28 05:11:24.226013] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.730 [2024-10-28 05:11:24.229588] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.730 [2024-10-28 05:11:24.238845] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.730 [2024-10-28 05:11:24.239245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.730 [2024-10-28 05:11:24.239277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.730 [2024-10-28 05:11:24.239295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.730 [2024-10-28 05:11:24.239534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.730 [2024-10-28 05:11:24.239788] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.730 [2024-10-28 05:11:24.239813] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.730 [2024-10-28 05:11:24.239828] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.730 [2024-10-28 05:11:24.243390] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.730 [2024-10-28 05:11:24.252859] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.730 [2024-10-28 05:11:24.253323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.730 [2024-10-28 05:11:24.253350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.730 [2024-10-28 05:11:24.253381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.730 [2024-10-28 05:11:24.253621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.730 [2024-10-28 05:11:24.253865] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.730 [2024-10-28 05:11:24.253885] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.730 [2024-10-28 05:11:24.253897] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.730 [2024-10-28 05:11:24.257463] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.730 [2024-10-28 05:11:24.266730] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.730 [2024-10-28 05:11:24.267157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.730 [2024-10-28 05:11:24.267201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.730 [2024-10-28 05:11:24.267217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.730 [2024-10-28 05:11:24.267485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.730 [2024-10-28 05:11:24.267738] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.730 [2024-10-28 05:11:24.267763] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.730 [2024-10-28 05:11:24.267778] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.730 [2024-10-28 05:11:24.271333] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.730 [2024-10-28 05:11:24.280571] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.730 [2024-10-28 05:11:24.280942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.730 [2024-10-28 05:11:24.280974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.730 [2024-10-28 05:11:24.280992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.730 [2024-10-28 05:11:24.281229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.730 [2024-10-28 05:11:24.281472] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.730 [2024-10-28 05:11:24.281496] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.730 [2024-10-28 05:11:24.281511] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.730 [2024-10-28 05:11:24.285075] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.730 [2024-10-28 05:11:24.294387] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.730 [2024-10-28 05:11:24.294793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.730 [2024-10-28 05:11:24.294824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.730 [2024-10-28 05:11:24.294842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.730 [2024-10-28 05:11:24.295078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.730 [2024-10-28 05:11:24.295320] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.730 [2024-10-28 05:11:24.295344] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.730 [2024-10-28 05:11:24.295359] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.730 [2024-10-28 05:11:24.298920] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.730 [2024-10-28 05:11:24.308359] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.730 [2024-10-28 05:11:24.308802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.730 [2024-10-28 05:11:24.308839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.730 [2024-10-28 05:11:24.308857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.730 [2024-10-28 05:11:24.309094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.730 [2024-10-28 05:11:24.309336] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.731 [2024-10-28 05:11:24.309359] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.731 [2024-10-28 05:11:24.309374] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.731 [2024-10-28 05:11:24.312947] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.731 [2024-10-28 05:11:24.322188] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.990 [2024-10-28 05:11:24.322601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.990 [2024-10-28 05:11:24.322632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.990 [2024-10-28 05:11:24.322690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.990 [2024-10-28 05:11:24.322927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.990 [2024-10-28 05:11:24.323168] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.990 [2024-10-28 05:11:24.323192] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.990 [2024-10-28 05:11:24.323207] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.990 [2024-10-28 05:11:24.326769] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.990 [2024-10-28 05:11:24.336015] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.990 [2024-10-28 05:11:24.336406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.990 [2024-10-28 05:11:24.336449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.990 [2024-10-28 05:11:24.336465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.990 [2024-10-28 05:11:24.336710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.990 [2024-10-28 05:11:24.336954] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.990 [2024-10-28 05:11:24.336978] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.990 [2024-10-28 05:11:24.336994] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.990 [2024-10-28 05:11:24.340547] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.990 [2024-10-28 05:11:24.350000] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.990 [2024-10-28 05:11:24.350389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.990 [2024-10-28 05:11:24.350421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.990 [2024-10-28 05:11:24.350438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.990 [2024-10-28 05:11:24.350694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.990 [2024-10-28 05:11:24.350937] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.990 [2024-10-28 05:11:24.350962] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.990 [2024-10-28 05:11:24.350978] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.990 [2024-10-28 05:11:24.354531] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.990 [2024-10-28 05:11:24.363979] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.990 [2024-10-28 05:11:24.364390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.991 [2024-10-28 05:11:24.364422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.991 [2024-10-28 05:11:24.364440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.991 [2024-10-28 05:11:24.364688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.991 [2024-10-28 05:11:24.364942] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.991 [2024-10-28 05:11:24.364966] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.991 [2024-10-28 05:11:24.364982] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.991 [2024-10-28 05:11:24.368532] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.991 [2024-10-28 05:11:24.377774] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.991 [2024-10-28 05:11:24.378226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.991 [2024-10-28 05:11:24.378254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.991 [2024-10-28 05:11:24.378269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.991 [2024-10-28 05:11:24.378511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.991 [2024-10-28 05:11:24.378766] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.991 [2024-10-28 05:11:24.378791] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.991 [2024-10-28 05:11:24.378807] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.991 [2024-10-28 05:11:24.382365] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.991 [2024-10-28 05:11:24.391631] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.991 [2024-10-28 05:11:24.392080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.991 [2024-10-28 05:11:24.392130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.991 [2024-10-28 05:11:24.392148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.991 [2024-10-28 05:11:24.392385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.991 [2024-10-28 05:11:24.392627] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.991 [2024-10-28 05:11:24.392663] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.991 [2024-10-28 05:11:24.392688] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.991 [2024-10-28 05:11:24.396244] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.991 [2024-10-28 05:11:24.405489] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.991 [2024-10-28 05:11:24.405913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.991 [2024-10-28 05:11:24.405947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.991 [2024-10-28 05:11:24.405965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.991 [2024-10-28 05:11:24.406203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.991 [2024-10-28 05:11:24.406445] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.991 [2024-10-28 05:11:24.406483] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.991 [2024-10-28 05:11:24.406499] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.991 [2024-10-28 05:11:24.410106] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.991 [2024-10-28 05:11:24.419372] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.991 [2024-10-28 05:11:24.419840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.991 [2024-10-28 05:11:24.419870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.991 [2024-10-28 05:11:24.419893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.991 [2024-10-28 05:11:24.420146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.991 [2024-10-28 05:11:24.420341] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.991 [2024-10-28 05:11:24.420361] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.991 [2024-10-28 05:11:24.420375] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.991 [2024-10-28 05:11:24.423501] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.991 [2024-10-28 05:11:24.433294] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.991 [2024-10-28 05:11:24.433709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.991 [2024-10-28 05:11:24.433740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.991 [2024-10-28 05:11:24.433757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.991 [2024-10-28 05:11:24.433999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.991 [2024-10-28 05:11:24.434242] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.991 [2024-10-28 05:11:24.434266] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.991 [2024-10-28 05:11:24.434283] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.991 [2024-10-28 05:11:24.437855] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.991 [2024-10-28 05:11:24.447057] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.991 [2024-10-28 05:11:24.447448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.991 [2024-10-28 05:11:24.447480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.991 [2024-10-28 05:11:24.447499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.991 [2024-10-28 05:11:24.447769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.991 [2024-10-28 05:11:24.448003] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.991 [2024-10-28 05:11:24.448027] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.991 [2024-10-28 05:11:24.448043] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.991 [2024-10-28 05:11:24.451662] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.991 [2024-10-28 05:11:24.460875] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.991 [2024-10-28 05:11:24.461368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.991 [2024-10-28 05:11:24.461396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.991 [2024-10-28 05:11:24.461411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.991 [2024-10-28 05:11:24.461676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.991 [2024-10-28 05:11:24.461919] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.991 [2024-10-28 05:11:24.461944] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.991 [2024-10-28 05:11:24.461960] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.991 [2024-10-28 05:11:24.465516] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.991 [2024-10-28 05:11:24.474750] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.991 [2024-10-28 05:11:24.475163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.991 [2024-10-28 05:11:24.475191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.991 [2024-10-28 05:11:24.475207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.991 [2024-10-28 05:11:24.475431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.991 [2024-10-28 05:11:24.475687] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.991 [2024-10-28 05:11:24.475713] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.991 [2024-10-28 05:11:24.475730] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.991 [2024-10-28 05:11:24.479285] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.991 [2024-10-28 05:11:24.488721] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.991 [2024-10-28 05:11:24.489108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.991 [2024-10-28 05:11:24.489144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.991 [2024-10-28 05:11:24.489163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.991 [2024-10-28 05:11:24.489401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.991 [2024-10-28 05:11:24.489657] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.991 [2024-10-28 05:11:24.489682] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.991 [2024-10-28 05:11:24.489699] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.991 [2024-10-28 05:11:24.493253] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.991 [2024-10-28 05:11:24.502700] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.991 [2024-10-28 05:11:24.503126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.991 [2024-10-28 05:11:24.503154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.991 [2024-10-28 05:11:24.503170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.991 [2024-10-28 05:11:24.503415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.991 [2024-10-28 05:11:24.503672] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.992 [2024-10-28 05:11:24.503697] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.992 [2024-10-28 05:11:24.503713] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.992 [2024-10-28 05:11:24.507266] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.992 [2024-10-28 05:11:24.516494] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.992 [2024-10-28 05:11:24.516921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.992 [2024-10-28 05:11:24.516952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.992 [2024-10-28 05:11:24.516970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.992 [2024-10-28 05:11:24.517208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.992 [2024-10-28 05:11:24.517451] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.992 [2024-10-28 05:11:24.517475] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.992 [2024-10-28 05:11:24.517491] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.992 [2024-10-28 05:11:24.521054] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.992 [2024-10-28 05:11:24.530300] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.992 [2024-10-28 05:11:24.530717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.992 [2024-10-28 05:11:24.530750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.992 [2024-10-28 05:11:24.530768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.992 [2024-10-28 05:11:24.531011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.992 [2024-10-28 05:11:24.531254] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.992 [2024-10-28 05:11:24.531279] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.992 [2024-10-28 05:11:24.531295] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.992 [2024-10-28 05:11:24.534856] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.992 [2024-10-28 05:11:24.544293] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.992 [2024-10-28 05:11:24.544688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.992 [2024-10-28 05:11:24.544720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.992 [2024-10-28 05:11:24.544739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.992 [2024-10-28 05:11:24.544976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.992 [2024-10-28 05:11:24.545219] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.992 [2024-10-28 05:11:24.545245] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.992 [2024-10-28 05:11:24.545261] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.992 [2024-10-28 05:11:24.548821] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.992 [2024-10-28 05:11:24.558254] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.992 [2024-10-28 05:11:24.558668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.992 [2024-10-28 05:11:24.558700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.992 [2024-10-28 05:11:24.558718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.992 [2024-10-28 05:11:24.558956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.992 [2024-10-28 05:11:24.559199] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.992 [2024-10-28 05:11:24.559223] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.992 [2024-10-28 05:11:24.559240] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.992 [2024-10-28 05:11:24.562804] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.992 [2024-10-28 05:11:24.572241] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.992 [2024-10-28 05:11:24.572663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.992 [2024-10-28 05:11:24.572696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:33.992 [2024-10-28 05:11:24.572714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:33.992 [2024-10-28 05:11:24.572952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:33.992 [2024-10-28 05:11:24.573195] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.992 [2024-10-28 05:11:24.573220] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.992 [2024-10-28 05:11:24.573241] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.992 [2024-10-28 05:11:24.576806] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.252 [2024-10-28 05:11:24.586033] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.252 [2024-10-28 05:11:24.586459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.252 [2024-10-28 05:11:24.586491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.252 [2024-10-28 05:11:24.586510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.252 [2024-10-28 05:11:24.586759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.252 [2024-10-28 05:11:24.587004] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.252 [2024-10-28 05:11:24.587028] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.252 [2024-10-28 05:11:24.587045] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.252 [2024-10-28 05:11:24.590598] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.252 [2024-10-28 05:11:24.599830] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.252 [2024-10-28 05:11:24.600240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.252 [2024-10-28 05:11:24.600272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.252 [2024-10-28 05:11:24.600290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.252 [2024-10-28 05:11:24.600527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.252 [2024-10-28 05:11:24.600783] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.252 [2024-10-28 05:11:24.600809] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.252 [2024-10-28 05:11:24.600825] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.252 [2024-10-28 05:11:24.604378] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.252 [2024-10-28 05:11:24.613827] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.252 [2024-10-28 05:11:24.614262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.252 [2024-10-28 05:11:24.614294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.252 [2024-10-28 05:11:24.614312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.252 [2024-10-28 05:11:24.614550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.252 [2024-10-28 05:11:24.614804] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.252 [2024-10-28 05:11:24.614830] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.252 [2024-10-28 05:11:24.614846] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.252 [2024-10-28 05:11:24.618400] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.252 [2024-10-28 05:11:24.627642] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.252 [2024-10-28 05:11:24.628180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.252 [2024-10-28 05:11:24.628233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.252 [2024-10-28 05:11:24.628252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.252 [2024-10-28 05:11:24.628499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.252 [2024-10-28 05:11:24.628754] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.252 [2024-10-28 05:11:24.628780] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.252 [2024-10-28 05:11:24.628796] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.252 [2024-10-28 05:11:24.632350] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.252 4205.00 IOPS, 16.43 MiB/s [2024-10-28T04:11:24.848Z] [2024-10-28 05:11:24.641628] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.252 [2024-10-28 05:11:24.642053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.252 [2024-10-28 05:11:24.642085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.252 [2024-10-28 05:11:24.642104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.252 [2024-10-28 05:11:24.642342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.252 [2024-10-28 05:11:24.642586] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.252 [2024-10-28 05:11:24.642610] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.252 [2024-10-28 05:11:24.642626] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.252 [2024-10-28 05:11:24.646188] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.252 [2024-10-28 05:11:24.655619] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.252 [2024-10-28 05:11:24.656054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.253 [2024-10-28 05:11:24.656082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.253 [2024-10-28 05:11:24.656098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.253 [2024-10-28 05:11:24.656341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.253 [2024-10-28 05:11:24.656586] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.253 [2024-10-28 05:11:24.656610] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.253 [2024-10-28 05:11:24.656627] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.253 [2024-10-28 05:11:24.660251] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.253 [2024-10-28 05:11:24.669558] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.253 [2024-10-28 05:11:24.669983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.253 [2024-10-28 05:11:24.670021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.253 [2024-10-28 05:11:24.670040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.253 [2024-10-28 05:11:24.670279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.253 [2024-10-28 05:11:24.670522] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.253 [2024-10-28 05:11:24.670546] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.253 [2024-10-28 05:11:24.670563] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.253 [2024-10-28 05:11:24.674124] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.253 [2024-10-28 05:11:24.683557] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.253 [2024-10-28 05:11:24.683991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.253 [2024-10-28 05:11:24.684023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.253 [2024-10-28 05:11:24.684041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.253 [2024-10-28 05:11:24.684279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.253 [2024-10-28 05:11:24.684522] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.253 [2024-10-28 05:11:24.684547] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.253 [2024-10-28 05:11:24.684563] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.253 [2024-10-28 05:11:24.688127] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.253 [2024-10-28 05:11:24.697560] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.253 [2024-10-28 05:11:24.697983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.253 [2024-10-28 05:11:24.698016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.253 [2024-10-28 05:11:24.698034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.253 [2024-10-28 05:11:24.698272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.253 [2024-10-28 05:11:24.698515] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.253 [2024-10-28 05:11:24.698539] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.253 [2024-10-28 05:11:24.698555] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.253 [2024-10-28 05:11:24.702119] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.253 [2024-10-28 05:11:24.711552] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.253 [2024-10-28 05:11:24.711977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.253 [2024-10-28 05:11:24.712009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.253 [2024-10-28 05:11:24.712027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.253 [2024-10-28 05:11:24.712271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.253 [2024-10-28 05:11:24.712514] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.253 [2024-10-28 05:11:24.712539] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.253 [2024-10-28 05:11:24.712555] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.253 [2024-10-28 05:11:24.716119] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.253 [2024-10-28 05:11:24.725347] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.253 [2024-10-28 05:11:24.725770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.253 [2024-10-28 05:11:24.725803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.253 [2024-10-28 05:11:24.725821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.253 [2024-10-28 05:11:24.726060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.253 [2024-10-28 05:11:24.726303] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.253 [2024-10-28 05:11:24.726328] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.253 [2024-10-28 05:11:24.726344] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.253 [2024-10-28 05:11:24.729925] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.253 [2024-10-28 05:11:24.739159] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.253 [2024-10-28 05:11:24.739551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.253 [2024-10-28 05:11:24.739583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.253 [2024-10-28 05:11:24.739601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.253 [2024-10-28 05:11:24.739851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.253 [2024-10-28 05:11:24.740095] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.253 [2024-10-28 05:11:24.740121] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.253 [2024-10-28 05:11:24.740137] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.253 [2024-10-28 05:11:24.743693] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.253 [2024-10-28 05:11:24.753124] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.253 [2024-10-28 05:11:24.753534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.253 [2024-10-28 05:11:24.753566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.253 [2024-10-28 05:11:24.753584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.253 [2024-10-28 05:11:24.753833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.253 [2024-10-28 05:11:24.754077] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.253 [2024-10-28 05:11:24.754101] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.253 [2024-10-28 05:11:24.754127] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.253 [2024-10-28 05:11:24.757686] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.253 [2024-10-28 05:11:24.767112] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.253 [2024-10-28 05:11:24.767502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.253 [2024-10-28 05:11:24.767534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.253 [2024-10-28 05:11:24.767553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.253 [2024-10-28 05:11:24.767803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.253 [2024-10-28 05:11:24.768046] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.253 [2024-10-28 05:11:24.768071] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.253 [2024-10-28 05:11:24.768087] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.253 [2024-10-28 05:11:24.771646] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.253 [2024-10-28 05:11:24.781079] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.253 [2024-10-28 05:11:24.781493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.253 [2024-10-28 05:11:24.781525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.253 [2024-10-28 05:11:24.781543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.253 [2024-10-28 05:11:24.781793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.253 [2024-10-28 05:11:24.782038] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.253 [2024-10-28 05:11:24.782062] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.253 [2024-10-28 05:11:24.782078] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.253 [2024-10-28 05:11:24.785629] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.253 [2024-10-28 05:11:24.795076] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.253 [2024-10-28 05:11:24.795474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.253 [2024-10-28 05:11:24.795506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.253 [2024-10-28 05:11:24.795524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.253 [2024-10-28 05:11:24.795774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.253 [2024-10-28 05:11:24.796018] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.254 [2024-10-28 05:11:24.796043] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.254 [2024-10-28 05:11:24.796059] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.254 [2024-10-28 05:11:24.799611] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.254 [2024-10-28 05:11:24.809052] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.254 [2024-10-28 05:11:24.809473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.254 [2024-10-28 05:11:24.809501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.254 [2024-10-28 05:11:24.809516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.254 [2024-10-28 05:11:24.809779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.254 [2024-10-28 05:11:24.810023] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.254 [2024-10-28 05:11:24.810047] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.254 [2024-10-28 05:11:24.810064] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.254 [2024-10-28 05:11:24.813620] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.254 [2024-10-28 05:11:24.822854] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.254 [2024-10-28 05:11:24.823275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.254 [2024-10-28 05:11:24.823303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.254 [2024-10-28 05:11:24.823319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.254 [2024-10-28 05:11:24.823564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.254 [2024-10-28 05:11:24.823818] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.254 [2024-10-28 05:11:24.823844] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.254 [2024-10-28 05:11:24.823860] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.254 [2024-10-28 05:11:24.827409] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.254 [2024-10-28 05:11:24.836667] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.254 [2024-10-28 05:11:24.837089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.254 [2024-10-28 05:11:24.837121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.254 [2024-10-28 05:11:24.837139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.254 [2024-10-28 05:11:24.837377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.254 [2024-10-28 05:11:24.837620] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.254 [2024-10-28 05:11:24.837654] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.254 [2024-10-28 05:11:24.837671] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.254 [2024-10-28 05:11:24.841222] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.513 [2024-10-28 05:11:24.850663] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.513 [2024-10-28 05:11:24.851076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.513 [2024-10-28 05:11:24.851108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.513 [2024-10-28 05:11:24.851132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.513 [2024-10-28 05:11:24.851370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.513 [2024-10-28 05:11:24.851613] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.513 [2024-10-28 05:11:24.851649] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.513 [2024-10-28 05:11:24.851668] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.513 [2024-10-28 05:11:24.855221] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.513 [2024-10-28 05:11:24.864656] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.513 [2024-10-28 05:11:24.865114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.513 [2024-10-28 05:11:24.865146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.513 [2024-10-28 05:11:24.865164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.513 [2024-10-28 05:11:24.865402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.513 [2024-10-28 05:11:24.865657] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.513 [2024-10-28 05:11:24.865683] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.513 [2024-10-28 05:11:24.865699] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.513 [2024-10-28 05:11:24.869249] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.513 [2024-10-28 05:11:24.878472] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.513 [2024-10-28 05:11:24.878904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.513 [2024-10-28 05:11:24.878933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.513 [2024-10-28 05:11:24.878950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.513 [2024-10-28 05:11:24.879213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.513 [2024-10-28 05:11:24.879456] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.513 [2024-10-28 05:11:24.879480] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.513 [2024-10-28 05:11:24.879497] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.513 [2024-10-28 05:11:24.883057] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.513 [2024-10-28 05:11:24.892284] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.513 [2024-10-28 05:11:24.892764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.513 [2024-10-28 05:11:24.892793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.513 [2024-10-28 05:11:24.892810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.513 [2024-10-28 05:11:24.893067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.513 [2024-10-28 05:11:24.893316] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.513 [2024-10-28 05:11:24.893340] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.513 [2024-10-28 05:11:24.893356] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.513 [2024-10-28 05:11:24.896918] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.513 [2024-10-28 05:11:24.906146] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.513 [2024-10-28 05:11:24.906534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.513 [2024-10-28 05:11:24.906566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.513 [2024-10-28 05:11:24.906585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.513 [2024-10-28 05:11:24.906835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.513 [2024-10-28 05:11:24.907078] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.513 [2024-10-28 05:11:24.907103] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.513 [2024-10-28 05:11:24.907119] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.513 [2024-10-28 05:11:24.910750] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.513 [2024-10-28 05:11:24.920073] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.513 [2024-10-28 05:11:24.920490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.513 [2024-10-28 05:11:24.920523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.513 [2024-10-28 05:11:24.920541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.513 [2024-10-28 05:11:24.920790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.513 [2024-10-28 05:11:24.921034] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.513 [2024-10-28 05:11:24.921059] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.513 [2024-10-28 05:11:24.921075] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.514 [2024-10-28 05:11:24.924625] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.514 [2024-10-28 05:11:24.933875] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.514 [2024-10-28 05:11:24.934370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-10-28 05:11:24.934402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.514 [2024-10-28 05:11:24.934421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.514 [2024-10-28 05:11:24.934668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.514 [2024-10-28 05:11:24.934911] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.514 [2024-10-28 05:11:24.934936] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.514 [2024-10-28 05:11:24.934958] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.514 [2024-10-28 05:11:24.938513] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.514 [2024-10-28 05:11:24.947744] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.514 [2024-10-28 05:11:24.948167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-10-28 05:11:24.948198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.514 [2024-10-28 05:11:24.948216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.514 [2024-10-28 05:11:24.948454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.514 [2024-10-28 05:11:24.948710] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.514 [2024-10-28 05:11:24.948735] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.514 [2024-10-28 05:11:24.948752] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.514 [2024-10-28 05:11:24.952304] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.514 [2024-10-28 05:11:24.961530] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.514 [2024-10-28 05:11:24.961927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-10-28 05:11:24.961959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.514 [2024-10-28 05:11:24.961977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.514 [2024-10-28 05:11:24.962215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.514 [2024-10-28 05:11:24.962458] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.514 [2024-10-28 05:11:24.962482] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.514 [2024-10-28 05:11:24.962499] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.514 [2024-10-28 05:11:24.966061] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.514 [2024-10-28 05:11:24.975499] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.514 [2024-10-28 05:11:24.975923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-10-28 05:11:24.975955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.514 [2024-10-28 05:11:24.975973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.514 [2024-10-28 05:11:24.976210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.514 [2024-10-28 05:11:24.976454] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.514 [2024-10-28 05:11:24.976478] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.514 [2024-10-28 05:11:24.976494] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.514 [2024-10-28 05:11:24.980058] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.514 [2024-10-28 05:11:24.989495] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.514 [2024-10-28 05:11:24.989916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-10-28 05:11:24.989948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.514 [2024-10-28 05:11:24.989966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.514 [2024-10-28 05:11:24.990204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.514 [2024-10-28 05:11:24.990446] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.514 [2024-10-28 05:11:24.990471] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.514 [2024-10-28 05:11:24.990487] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.514 [2024-10-28 05:11:24.994065] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.514 [2024-10-28 05:11:25.003292] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.514 [2024-10-28 05:11:25.003723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-10-28 05:11:25.003756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.514 [2024-10-28 05:11:25.003774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.514 [2024-10-28 05:11:25.004012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.514 [2024-10-28 05:11:25.004255] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.514 [2024-10-28 05:11:25.004280] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.514 [2024-10-28 05:11:25.004297] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.514 [2024-10-28 05:11:25.007864] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.514 [2024-10-28 05:11:25.017105] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.514 [2024-10-28 05:11:25.017530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-10-28 05:11:25.017562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.514 [2024-10-28 05:11:25.017581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.514 [2024-10-28 05:11:25.017829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.514 [2024-10-28 05:11:25.018073] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.514 [2024-10-28 05:11:25.018098] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.514 [2024-10-28 05:11:25.018114] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.514 [2024-10-28 05:11:25.021671] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.514 [2024-10-28 05:11:25.031120] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.514 [2024-10-28 05:11:25.031542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-10-28 05:11:25.031574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.514 [2024-10-28 05:11:25.031604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.514 [2024-10-28 05:11:25.031852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.514 [2024-10-28 05:11:25.032095] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.514 [2024-10-28 05:11:25.032120] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.514 [2024-10-28 05:11:25.032136] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.514 [2024-10-28 05:11:25.035696] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.514 [2024-10-28 05:11:25.044922] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.514 [2024-10-28 05:11:25.045318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-10-28 05:11:25.045356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.514 [2024-10-28 05:11:25.045372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.514 [2024-10-28 05:11:25.045608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.514 [2024-10-28 05:11:25.045869] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.514 [2024-10-28 05:11:25.045894] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.514 [2024-10-28 05:11:25.045910] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.514 [2024-10-28 05:11:25.049469] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.514 [2024-10-28 05:11:25.058923] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.514 [2024-10-28 05:11:25.059340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-10-28 05:11:25.059379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.514 [2024-10-28 05:11:25.059398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.514 [2024-10-28 05:11:25.059652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.514 [2024-10-28 05:11:25.059896] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.514 [2024-10-28 05:11:25.059931] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.514 [2024-10-28 05:11:25.059947] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.514 [2024-10-28 05:11:25.063499] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.514 [2024-10-28 05:11:25.072806] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.514 [2024-10-28 05:11:25.073237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-10-28 05:11:25.073269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.515 [2024-10-28 05:11:25.073287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.515 [2024-10-28 05:11:25.073524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.515 [2024-10-28 05:11:25.073788] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.515 [2024-10-28 05:11:25.073812] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.515 [2024-10-28 05:11:25.073827] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.515 [2024-10-28 05:11:25.077410] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.515 [2024-10-28 05:11:25.086734] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.515 [2024-10-28 05:11:25.087203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-10-28 05:11:25.087250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.515 [2024-10-28 05:11:25.087268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.515 [2024-10-28 05:11:25.087505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.515 [2024-10-28 05:11:25.087765] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.515 [2024-10-28 05:11:25.087788] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.515 [2024-10-28 05:11:25.087803] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.515 [2024-10-28 05:11:25.091401] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.515 [2024-10-28 05:11:25.100648] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.515 [2024-10-28 05:11:25.101056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-10-28 05:11:25.101090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.515 [2024-10-28 05:11:25.101106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.515 [2024-10-28 05:11:25.101351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.515 [2024-10-28 05:11:25.101594] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.515 [2024-10-28 05:11:25.101630] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.515 [2024-10-28 05:11:25.101655] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.515 [2024-10-28 05:11:25.105230] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.775 [2024-10-28 05:11:25.114470] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.775 [2024-10-28 05:11:25.114918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.775 [2024-10-28 05:11:25.114968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.775 [2024-10-28 05:11:25.114984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.775 [2024-10-28 05:11:25.115241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.775 [2024-10-28 05:11:25.115483] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.775 [2024-10-28 05:11:25.115517] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.775 [2024-10-28 05:11:25.115539] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.775 [2024-10-28 05:11:25.119099] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.775 [2024-10-28 05:11:25.128335] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.775 [2024-10-28 05:11:25.128728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.775 [2024-10-28 05:11:25.128770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.775 [2024-10-28 05:11:25.128788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.775 [2024-10-28 05:11:25.129055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.775 [2024-10-28 05:11:25.129299] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.775 [2024-10-28 05:11:25.129323] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.775 [2024-10-28 05:11:25.129340] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.775 [2024-10-28 05:11:25.132906] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2476676 Killed "${NVMF_APP[@]}" "$@" 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2477712 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2477712 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2477712 ']' 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
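At this point bdevperf.sh has killed the previous target process ("Killed ${NVMF_APP[@]}") and tgt_init/nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk namespace, then wait for its RPC socket. The sketch below only approximates that restart sequence from the command line and socket path visible in the log; it is an assumption-labelled simplification, not the suite's actual tgt_init/nvmfappstart/waitforlisten implementation:

    # Simplified restart helper (assumes the nvmf_tgt binary and netns shown above exist).
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC_SOCK=/var/tmp/spdk.sock

    restart_tgt_sketch() {
        # Relaunch the target with the same flags the log shows (-i 0 -e 0xFFFF -m 0xE).
        ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xE &
        local tgt_pid=$!

        # Wait up to ~10s for the target to come up and listen on its UNIX RPC socket,
        # mirroring the "Waiting for process to start up..." message above.
        for _ in $(seq 1 100); do
            [ -S "$RPC_SOCK" ] && break
            sleep 0.1
        done
        echo "nvmf_tgt restarted as pid $tgt_pid"
    }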
00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:34.775 05:11:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:34.775 [2024-10-28 05:11:25.142150] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.775 [2024-10-28 05:11:25.142586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.775 [2024-10-28 05:11:25.142618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.775 [2024-10-28 05:11:25.142654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.775 [2024-10-28 05:11:25.142894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.775 [2024-10-28 05:11:25.143141] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.776 [2024-10-28 05:11:25.143171] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.776 [2024-10-28 05:11:25.143187] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.776 [2024-10-28 05:11:25.146749] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.776 [2024-10-28 05:11:25.155997] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.776 [2024-10-28 05:11:25.156397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.776 [2024-10-28 05:11:25.156438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.776 [2024-10-28 05:11:25.156456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.776 [2024-10-28 05:11:25.156703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.776 [2024-10-28 05:11:25.156938] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.776 [2024-10-28 05:11:25.156972] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.776 [2024-10-28 05:11:25.156989] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.776 [2024-10-28 05:11:25.160465] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.776 [2024-10-28 05:11:25.169406] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.776 [2024-10-28 05:11:25.169820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.776 [2024-10-28 05:11:25.169850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.776 [2024-10-28 05:11:25.169866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.776 [2024-10-28 05:11:25.170118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.776 [2024-10-28 05:11:25.170318] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.776 [2024-10-28 05:11:25.170338] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.776 [2024-10-28 05:11:25.170352] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.776 [2024-10-28 05:11:25.173428] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.776 [2024-10-28 05:11:25.182729] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.776 [2024-10-28 05:11:25.183160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.776 [2024-10-28 05:11:25.183190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.776 [2024-10-28 05:11:25.183210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.776 [2024-10-28 05:11:25.183444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.776 [2024-10-28 05:11:25.183693] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.776 [2024-10-28 05:11:25.183715] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.776 [2024-10-28 05:11:25.183729] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.776 [2024-10-28 05:11:25.186699] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.776 [2024-10-28 05:11:25.189814] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:35:34.776 [2024-10-28 05:11:25.189871] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:34.776 [2024-10-28 05:11:25.195988] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.776 [2024-10-28 05:11:25.196349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.776 [2024-10-28 05:11:25.196377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.776 [2024-10-28 05:11:25.196393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.776 [2024-10-28 05:11:25.196628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.776 [2024-10-28 05:11:25.196856] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.776 [2024-10-28 05:11:25.196877] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.776 [2024-10-28 05:11:25.196890] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.776 [2024-10-28 05:11:25.199945] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.776 [2024-10-28 05:11:25.209169] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.776 [2024-10-28 05:11:25.209574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.776 [2024-10-28 05:11:25.209601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.776 [2024-10-28 05:11:25.209617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.776 [2024-10-28 05:11:25.209886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.776 [2024-10-28 05:11:25.210096] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.776 [2024-10-28 05:11:25.210116] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.776 [2024-10-28 05:11:25.210129] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.776 [2024-10-28 05:11:25.213099] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.776 [2024-10-28 05:11:25.222369] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.776 [2024-10-28 05:11:25.222779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.776 [2024-10-28 05:11:25.222809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.776 [2024-10-28 05:11:25.222825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.776 [2024-10-28 05:11:25.223074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.776 [2024-10-28 05:11:25.223268] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.776 [2024-10-28 05:11:25.223287] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.776 [2024-10-28 05:11:25.223303] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.776 [2024-10-28 05:11:25.226263] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.776 [2024-10-28 05:11:25.236119] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.776 [2024-10-28 05:11:25.236553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.776 [2024-10-28 05:11:25.236596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.776 [2024-10-28 05:11:25.236614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.776 [2024-10-28 05:11:25.236888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.776 [2024-10-28 05:11:25.237142] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.776 [2024-10-28 05:11:25.237167] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.776 [2024-10-28 05:11:25.237182] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.776 [2024-10-28 05:11:25.240696] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.776 [2024-10-28 05:11:25.250082] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.776 [2024-10-28 05:11:25.250500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.776 [2024-10-28 05:11:25.250534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.776 [2024-10-28 05:11:25.250552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.776 [2024-10-28 05:11:25.250811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.776 [2024-10-28 05:11:25.251056] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.776 [2024-10-28 05:11:25.251080] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.776 [2024-10-28 05:11:25.251097] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.776 [2024-10-28 05:11:25.254653] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.776 [2024-10-28 05:11:25.263880] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.776 [2024-10-28 05:11:25.264274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.776 [2024-10-28 05:11:25.264305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.776 [2024-10-28 05:11:25.264323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.776 [2024-10-28 05:11:25.264560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.776 [2024-10-28 05:11:25.264814] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.776 [2024-10-28 05:11:25.264840] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.776 [2024-10-28 05:11:25.264856] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.776 [2024-10-28 05:11:25.268406] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.776 [2024-10-28 05:11:25.277842] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.776 [2024-10-28 05:11:25.278264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.776 [2024-10-28 05:11:25.278297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.776 [2024-10-28 05:11:25.278314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.776 [2024-10-28 05:11:25.278558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.776 [2024-10-28 05:11:25.278810] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.777 [2024-10-28 05:11:25.278835] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.777 [2024-10-28 05:11:25.278851] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.777 [2024-10-28 05:11:25.282405] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.777 [2024-10-28 05:11:25.291672] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.777 [2024-10-28 05:11:25.292094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.777 [2024-10-28 05:11:25.292125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.777 [2024-10-28 05:11:25.292152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.777 [2024-10-28 05:11:25.292389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.777 [2024-10-28 05:11:25.292641] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.777 [2024-10-28 05:11:25.292665] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.777 [2024-10-28 05:11:25.292682] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.777 [2024-10-28 05:11:25.296232] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.777 [2024-10-28 05:11:25.305646] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.777 [2024-10-28 05:11:25.306188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.777 [2024-10-28 05:11:25.306239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.777 [2024-10-28 05:11:25.306258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.777 [2024-10-28 05:11:25.306496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.777 [2024-10-28 05:11:25.306749] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.777 [2024-10-28 05:11:25.306774] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.777 [2024-10-28 05:11:25.306791] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.777 [2024-10-28 05:11:25.310338] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.777 [2024-10-28 05:11:25.319576] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.777 [2024-10-28 05:11:25.320011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.777 [2024-10-28 05:11:25.320038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.777 [2024-10-28 05:11:25.320054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.777 [2024-10-28 05:11:25.320288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.777 [2024-10-28 05:11:25.320532] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.777 [2024-10-28 05:11:25.320557] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.777 [2024-10-28 05:11:25.320573] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.777 [2024-10-28 05:11:25.324129] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.777 [2024-10-28 05:11:25.329370] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:35:34.777 [2024-10-28 05:11:25.333315] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.777 [2024-10-28 05:11:25.333745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.777 [2024-10-28 05:11:25.333775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.777 [2024-10-28 05:11:25.333801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.777 [2024-10-28 05:11:25.334053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.777 [2024-10-28 05:11:25.334296] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.777 [2024-10-28 05:11:25.334320] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.777 [2024-10-28 05:11:25.334337] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.777 [2024-10-28 05:11:25.337838] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:34.777 [2024-10-28 05:11:25.347161] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.777 [2024-10-28 05:11:25.347583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.777 [2024-10-28 05:11:25.347615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.777 [2024-10-28 05:11:25.347656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.777 [2024-10-28 05:11:25.347917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.777 [2024-10-28 05:11:25.348172] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.777 [2024-10-28 05:11:25.348197] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.777 [2024-10-28 05:11:25.348213] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.777 [2024-10-28 05:11:25.351710] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:34.777 [2024-10-28 05:11:25.360946] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:34.777 [2024-10-28 05:11:25.361360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.777 [2024-10-28 05:11:25.361392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:34.777 [2024-10-28 05:11:25.361422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:34.777 [2024-10-28 05:11:25.361684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:34.777 [2024-10-28 05:11:25.361889] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:34.777 [2024-10-28 05:11:25.361924] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:34.777 [2024-10-28 05:11:25.361941] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:34.777 [2024-10-28 05:11:25.365429] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.037 [2024-10-28 05:11:25.369248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:35.037 [2024-10-28 05:11:25.374779] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.037 [2024-10-28 05:11:25.375302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.037 [2024-10-28 05:11:25.375361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.037 [2024-10-28 05:11:25.375381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.037 [2024-10-28 05:11:25.375642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.037 [2024-10-28 05:11:25.375884] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.037 [2024-10-28 05:11:25.375923] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.037 [2024-10-28 05:11:25.375947] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.037 [2024-10-28 05:11:25.379583] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.037 [2024-10-28 05:11:25.388737] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.037 [2024-10-28 05:11:25.389278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.037 [2024-10-28 05:11:25.389328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.037 [2024-10-28 05:11:25.389349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.037 [2024-10-28 05:11:25.389594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.037 [2024-10-28 05:11:25.389859] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.037 [2024-10-28 05:11:25.389883] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.037 [2024-10-28 05:11:25.389900] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.037 [2024-10-28 05:11:25.393496] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.037 [2024-10-28 05:11:25.402673] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.037 [2024-10-28 05:11:25.403131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.037 [2024-10-28 05:11:25.403161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.037 [2024-10-28 05:11:25.403178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.037 [2024-10-28 05:11:25.403429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.037 [2024-10-28 05:11:25.403703] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.037 [2024-10-28 05:11:25.403738] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.037 [2024-10-28 05:11:25.403755] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.037 [2024-10-28 05:11:25.407370] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.037 [2024-10-28 05:11:25.416724] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.037 [2024-10-28 05:11:25.417922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.037 [2024-10-28 05:11:25.417968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.037 [2024-10-28 05:11:25.417988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.037 [2024-10-28 05:11:25.418241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.037 [2024-10-28 05:11:25.418486] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.037 [2024-10-28 05:11:25.418511] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.037 [2024-10-28 05:11:25.418528] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.037 [2024-10-28 05:11:25.422112] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.037 [2024-10-28 05:11:25.423273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.037 [2024-10-28 05:11:25.423310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.037 [2024-10-28 05:11:25.423334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.037 [2024-10-28 05:11:25.423347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.038 [2024-10-28 05:11:25.423358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
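The app_setup_trace notices above mean the restarted target was launched with tracepoints enabled (-e 0xFFFF), so its behaviour during this reconnect storm can be inspected at runtime. A short usage sketch, using only the commands the notices themselves suggest:

    # Capture a snapshot of nvmf tracepoints from app instance 0, as the NOTICE suggests.
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0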
00:35:35.038 [2024-10-28 05:11:25.424901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:35.038 [2024-10-28 05:11:25.424964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:35.038 [2024-10-28 05:11:25.424968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.038 [2024-10-28 05:11:25.430302] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.038 [2024-10-28 05:11:25.430816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.038 [2024-10-28 05:11:25.430851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.038 [2024-10-28 05:11:25.430870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.038 [2024-10-28 05:11:25.431121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.038 [2024-10-28 05:11:25.431332] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.038 [2024-10-28 05:11:25.431366] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.038 [2024-10-28 05:11:25.431380] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.038 [2024-10-28 05:11:25.434588] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.038 [2024-10-28 05:11:25.443773] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.038 [2024-10-28 05:11:25.444397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.038 [2024-10-28 05:11:25.444445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.038 [2024-10-28 05:11:25.444489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.038 [2024-10-28 05:11:25.444751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.038 [2024-10-28 05:11:25.445004] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.038 [2024-10-28 05:11:25.445026] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.038 [2024-10-28 05:11:25.445042] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.038 [2024-10-28 05:11:25.448270] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
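The three "Reactor started on core N" notices and "Total cores available: 3" follow directly from the core mask passed to nvmfappstart: -m 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 excluded. A quick, purely illustrative check of that arithmetic:

    mask=0xE                              # core mask from "nvmfappstart -m 0xE"
    for core in $(seq 0 7); do
        # Print each core whose bit is set in the mask: expect cores 1, 2 and 3.
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done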
00:35:35.038 [2024-10-28 05:11:25.457422] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.038 [2024-10-28 05:11:25.457957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.038 [2024-10-28 05:11:25.457996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.038 [2024-10-28 05:11:25.458023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.038 [2024-10-28 05:11:25.458287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.038 [2024-10-28 05:11:25.458496] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.038 [2024-10-28 05:11:25.458518] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.038 [2024-10-28 05:11:25.458533] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.038 [2024-10-28 05:11:25.461777] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.038 [2024-10-28 05:11:25.471010] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.038 [2024-10-28 05:11:25.471590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.038 [2024-10-28 05:11:25.471649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.038 [2024-10-28 05:11:25.471670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.038 [2024-10-28 05:11:25.471908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.038 [2024-10-28 05:11:25.472136] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.038 [2024-10-28 05:11:25.472158] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.038 [2024-10-28 05:11:25.472174] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.038 [2024-10-28 05:11:25.475365] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.038 [2024-10-28 05:11:25.484565] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.038 [2024-10-28 05:11:25.485089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.038 [2024-10-28 05:11:25.485133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.038 [2024-10-28 05:11:25.485151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.038 [2024-10-28 05:11:25.485403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.038 [2024-10-28 05:11:25.485651] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.038 [2024-10-28 05:11:25.485674] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.038 [2024-10-28 05:11:25.485705] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.038 [2024-10-28 05:11:25.488875] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.038 [2024-10-28 05:11:25.498057] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.038 [2024-10-28 05:11:25.498695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.038 [2024-10-28 05:11:25.498737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.038 [2024-10-28 05:11:25.498756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.038 [2024-10-28 05:11:25.499007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.038 [2024-10-28 05:11:25.499226] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.038 [2024-10-28 05:11:25.499248] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.038 [2024-10-28 05:11:25.499263] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.038 [2024-10-28 05:11:25.502432] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.038 [2024-10-28 05:11:25.511604] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.038 [2024-10-28 05:11:25.512072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.038 [2024-10-28 05:11:25.512105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.038 [2024-10-28 05:11:25.512134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.038 [2024-10-28 05:11:25.512379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.038 [2024-10-28 05:11:25.512586] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.038 [2024-10-28 05:11:25.512608] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.038 [2024-10-28 05:11:25.512631] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.038 [2024-10-28 05:11:25.515796] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.038 [2024-10-28 05:11:25.525125] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.038 [2024-10-28 05:11:25.525570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.038 [2024-10-28 05:11:25.525601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.038 [2024-10-28 05:11:25.525619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.038 [2024-10-28 05:11:25.525856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.038 [2024-10-28 05:11:25.526080] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.038 [2024-10-28 05:11:25.526101] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.038 [2024-10-28 05:11:25.526123] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.038 [2024-10-28 05:11:25.529326] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.038 [2024-10-28 05:11:25.538666] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.038 [2024-10-28 05:11:25.539080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.038 [2024-10-28 05:11:25.539110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.038 [2024-10-28 05:11:25.539137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.038 [2024-10-28 05:11:25.539377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.038 [2024-10-28 05:11:25.539582] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.038 [2024-10-28 05:11:25.539603] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.038 [2024-10-28 05:11:25.539616] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.038 [2024-10-28 05:11:25.542782] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.038 [2024-10-28 05:11:25.552115] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.038 [2024-10-28 05:11:25.552467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.038 [2024-10-28 05:11:25.552496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.038 [2024-10-28 05:11:25.552513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.038 [2024-10-28 05:11:25.552737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.038 [2024-10-28 05:11:25.552972] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.038 [2024-10-28 05:11:25.553008] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.038 [2024-10-28 05:11:25.553022] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.038 [2024-10-28 05:11:25.556183] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.038 [2024-10-28 05:11:25.565654] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.039 [2024-10-28 05:11:25.566007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.039 [2024-10-28 05:11:25.566036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.039 [2024-10-28 05:11:25.566053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.039 [2024-10-28 05:11:25.566284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.039 [2024-10-28 05:11:25.566506] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.039 [2024-10-28 05:11:25.566527] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.039 [2024-10-28 05:11:25.566541] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.039 [2024-10-28 05:11:25.569692] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.039 [2024-10-28 05:11:25.579200] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.039 [2024-10-28 05:11:25.579553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.039 [2024-10-28 05:11:25.579581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.039 [2024-10-28 05:11:25.579598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.039 [2024-10-28 05:11:25.579837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.039 [2024-10-28 05:11:25.580061] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.039 [2024-10-28 05:11:25.580082] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.039 [2024-10-28 05:11:25.580096] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.039 [2024-10-28 05:11:25.583252] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.039 [2024-10-28 05:11:25.592564] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.039 [2024-10-28 05:11:25.592958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.039 [2024-10-28 05:11:25.592999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.039 [2024-10-28 05:11:25.593016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.039 [2024-10-28 05:11:25.593261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.039 [2024-10-28 05:11:25.593467] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.039 [2024-10-28 05:11:25.593487] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.039 [2024-10-28 05:11:25.593501] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.039 [2024-10-28 05:11:25.596595] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.039 [2024-10-28 05:11:25.606104] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.039 [2024-10-28 05:11:25.606539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.039 [2024-10-28 05:11:25.606568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.039 [2024-10-28 05:11:25.606584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.039 [2024-10-28 05:11:25.606807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.039 [2024-10-28 05:11:25.607036] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.039 [2024-10-28 05:11:25.607057] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.039 [2024-10-28 05:11:25.607070] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.039 [2024-10-28 05:11:25.610185] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.039 [2024-10-28 05:11:25.619473] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.039 [2024-10-28 05:11:25.619835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.039 [2024-10-28 05:11:25.619865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.039 [2024-10-28 05:11:25.619893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.039 [2024-10-28 05:11:25.620135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.039 [2024-10-28 05:11:25.620342] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.039 [2024-10-28 05:11:25.620363] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.039 [2024-10-28 05:11:25.620376] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.039 [2024-10-28 05:11:25.623535] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.298 [2024-10-28 05:11:25.633033] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.298 [2024-10-28 05:11:25.633406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.298 [2024-10-28 05:11:25.633445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-10-28 05:11:25.633462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.299 3504.17 IOPS, 13.69 MiB/s [2024-10-28T04:11:25.895Z] [2024-10-28 05:11:25.635267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.299 [2024-10-28 05:11:25.635509] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-10-28 05:11:25.635532] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-10-28 05:11:25.635547] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-10-28 05:11:25.638770] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
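The cycle repeating above is: the initiator disconnects and resets the controller, posix_sock_create's connect() to 10.0.0.2 port 4420 is refused (errno 111, ECONNREFUSED) while the target's listener is down, the subsequent flush of the qpair sees a bad file descriptor, controller reinitialization fails, and the reset is retried a few milliseconds later. The interleaved "3504.17 IOPS, 13.69 MiB/s" entry is the perf tool's periodic throughput sample. As a minimal standalone illustration of the first step only (not SPDK code; the address and port simply mirror the log, and the behavior assumes a reachable host with nothing listening on 4420):

/*
 * Minimal illustration (not SPDK code): connecting to a TCP port that has
 * no listener on a reachable host fails with ECONNREFUSED (errno 111 on
 * Linux), which is the "connect() failed, errno = 111" reported by
 * posix_sock_create in the log.  Address and port are placeholders that
 * mirror the log's 10.0.0.2:4420.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* placeholder target IP */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Host up, listener down: the RST surfaces as ECONNREFUSED (111). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}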
00:35:35.299 [2024-10-28 05:11:25.646432] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-10-28 05:11:25.646811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-10-28 05:11:25.646847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-10-28 05:11:25.646864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.299 [2024-10-28 05:11:25.647092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.299 [2024-10-28 05:11:25.647314] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-10-28 05:11:25.647334] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-10-28 05:11:25.647348] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-10-28 05:11:25.650499] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.299 [2024-10-28 05:11:25.659836] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-10-28 05:11:25.660210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-10-28 05:11:25.660241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-10-28 05:11:25.660258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.299 [2024-10-28 05:11:25.660498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.299 [2024-10-28 05:11:25.660758] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-10-28 05:11:25.660782] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-10-28 05:11:25.660797] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-10-28 05:11:25.664141] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.299 [2024-10-28 05:11:25.673312] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-10-28 05:11:25.673723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-10-28 05:11:25.673753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-10-28 05:11:25.673770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.299 [2024-10-28 05:11:25.674014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.299 [2024-10-28 05:11:25.674219] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-10-28 05:11:25.674241] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-10-28 05:11:25.674254] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-10-28 05:11:25.677441] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.299 [2024-10-28 05:11:25.686755] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-10-28 05:11:25.687132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-10-28 05:11:25.687163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-10-28 05:11:25.687180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.299 [2024-10-28 05:11:25.687408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.299 [2024-10-28 05:11:25.687632] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-10-28 05:11:25.687664] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-10-28 05:11:25.687679] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-10-28 05:11:25.690831] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.299 [2024-10-28 05:11:25.700136] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-10-28 05:11:25.700574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-10-28 05:11:25.700604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-10-28 05:11:25.700620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.299 [2024-10-28 05:11:25.700859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.299 [2024-10-28 05:11:25.701083] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-10-28 05:11:25.701105] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-10-28 05:11:25.701124] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-10-28 05:11:25.704286] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.299 [2024-10-28 05:11:25.713572] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-10-28 05:11:25.713973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-10-28 05:11:25.714003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-10-28 05:11:25.714021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.299 [2024-10-28 05:11:25.714265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.299 [2024-10-28 05:11:25.714472] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-10-28 05:11:25.714494] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-10-28 05:11:25.714508] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-10-28 05:11:25.717695] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.299 [2024-10-28 05:11:25.727134] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-10-28 05:11:25.727528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-10-28 05:11:25.727558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-10-28 05:11:25.727575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.299 [2024-10-28 05:11:25.727800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.299 [2024-10-28 05:11:25.728028] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-10-28 05:11:25.728050] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-10-28 05:11:25.728065] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-10-28 05:11:25.731198] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.299 [2024-10-28 05:11:25.740500] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-10-28 05:11:25.740863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-10-28 05:11:25.740895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-10-28 05:11:25.740912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.299 [2024-10-28 05:11:25.741156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.299 [2024-10-28 05:11:25.741364] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-10-28 05:11:25.741386] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-10-28 05:11:25.741399] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-10-28 05:11:25.744549] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.299 [2024-10-28 05:11:25.754033] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-10-28 05:11:25.754424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-10-28 05:11:25.754454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-10-28 05:11:25.754471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.299 [2024-10-28 05:11:25.754711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.299 [2024-10-28 05:11:25.754923] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-10-28 05:11:25.754961] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-10-28 05:11:25.754976] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-10-28 05:11:25.758181] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.299 [2024-10-28 05:11:25.767447] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-10-28 05:11:25.767840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.300 [2024-10-28 05:11:25.767870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.300 [2024-10-28 05:11:25.767887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.300 [2024-10-28 05:11:25.768128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.300 [2024-10-28 05:11:25.768335] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.300 [2024-10-28 05:11:25.768358] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.300 [2024-10-28 05:11:25.768371] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.300 [2024-10-28 05:11:25.771493] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.300 [2024-10-28 05:11:25.781012] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.300 [2024-10-28 05:11:25.781343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.300 [2024-10-28 05:11:25.781374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.300 [2024-10-28 05:11:25.781391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.300 [2024-10-28 05:11:25.781621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.300 [2024-10-28 05:11:25.781844] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.300 [2024-10-28 05:11:25.781867] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.300 [2024-10-28 05:11:25.781881] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.300 [2024-10-28 05:11:25.785033] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.300 [2024-10-28 05:11:25.794490] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.300 [2024-10-28 05:11:25.794868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.300 [2024-10-28 05:11:25.794899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.300 [2024-10-28 05:11:25.794921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.300 [2024-10-28 05:11:25.795150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.300 [2024-10-28 05:11:25.795373] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.300 [2024-10-28 05:11:25.795396] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.300 [2024-10-28 05:11:25.795410] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.300 [2024-10-28 05:11:25.798557] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.300 [2024-10-28 05:11:25.807921] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.300 [2024-10-28 05:11:25.808283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.300 [2024-10-28 05:11:25.808313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.300 [2024-10-28 05:11:25.808330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.300 [2024-10-28 05:11:25.808559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.300 [2024-10-28 05:11:25.808812] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.300 [2024-10-28 05:11:25.808837] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.300 [2024-10-28 05:11:25.808851] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.300 [2024-10-28 05:11:25.812032] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.300 [2024-10-28 05:11:25.821300] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.300 [2024-10-28 05:11:25.821705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.300 [2024-10-28 05:11:25.821736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.300 [2024-10-28 05:11:25.821754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.300 [2024-10-28 05:11:25.821983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.300 [2024-10-28 05:11:25.822205] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.300 [2024-10-28 05:11:25.822227] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.300 [2024-10-28 05:11:25.822242] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.300 [2024-10-28 05:11:25.825403] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.300 [2024-10-28 05:11:25.834804] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.300 [2024-10-28 05:11:25.835238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.300 [2024-10-28 05:11:25.835268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.300 [2024-10-28 05:11:25.835285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.300 [2024-10-28 05:11:25.835529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.300 [2024-10-28 05:11:25.835790] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.300 [2024-10-28 05:11:25.835815] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.300 [2024-10-28 05:11:25.835830] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.300 [2024-10-28 05:11:25.838987] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.300 [2024-10-28 05:11:25.848482] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.300 [2024-10-28 05:11:25.848871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.300 [2024-10-28 05:11:25.848902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.300 [2024-10-28 05:11:25.848919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.300 [2024-10-28 05:11:25.849162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.300 [2024-10-28 05:11:25.849369] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.300 [2024-10-28 05:11:25.849391] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.300 [2024-10-28 05:11:25.849405] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.300 [2024-10-28 05:11:25.852569] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.300 [2024-10-28 05:11:25.861958] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.300 [2024-10-28 05:11:25.862316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.300 [2024-10-28 05:11:25.862346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.300 [2024-10-28 05:11:25.862363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.300 [2024-10-28 05:11:25.862606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.300 [2024-10-28 05:11:25.862845] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.300 [2024-10-28 05:11:25.862869] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.300 [2024-10-28 05:11:25.862883] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.300 [2024-10-28 05:11:25.866029] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.300 [2024-10-28 05:11:25.875453] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.300 [2024-10-28 05:11:25.875845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.300 [2024-10-28 05:11:25.875875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.300 [2024-10-28 05:11:25.875892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.300 [2024-10-28 05:11:25.876137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.300 [2024-10-28 05:11:25.876343] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.300 [2024-10-28 05:11:25.876366] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.300 [2024-10-28 05:11:25.876385] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.300 [2024-10-28 05:11:25.879534] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.300 [2024-10-28 05:11:25.888966] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.300 [2024-10-28 05:11:25.889363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.300 [2024-10-28 05:11:25.889393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.300 [2024-10-28 05:11:25.889410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.300 [2024-10-28 05:11:25.889625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.300 [2024-10-28 05:11:25.889854] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.300 [2024-10-28 05:11:25.889878] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.300 [2024-10-28 05:11:25.889893] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.559 [2024-10-28 05:11:25.893279] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.559 [2024-10-28 05:11:25.902453] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.559 [2024-10-28 05:11:25.902826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-28 05:11:25.902856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.559 [2024-10-28 05:11:25.902874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.559 [2024-10-28 05:11:25.903118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.559 [2024-10-28 05:11:25.903325] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.559 [2024-10-28 05:11:25.903347] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-10-28 05:11:25.903361] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-10-28 05:11:25.906521] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-10-28 05:11:25.915846] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-10-28 05:11:25.916286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-28 05:11:25.916316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-10-28 05:11:25.916333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.560 [2024-10-28 05:11:25.916577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.560 [2024-10-28 05:11:25.916814] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-10-28 05:11:25.916838] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-10-28 05:11:25.916853] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-10-28 05:11:25.920082] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.560 [2024-10-28 05:11:25.929513] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-10-28 05:11:25.929873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-28 05:11:25.929903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-10-28 05:11:25.929920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.560 [2024-10-28 05:11:25.930149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.560 [2024-10-28 05:11:25.930371] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-10-28 05:11:25.930392] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-10-28 05:11:25.930406] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-10-28 05:11:25.933661] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-10-28 05:11:25.943024] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-10-28 05:11:25.943457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-28 05:11:25.943486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-10-28 05:11:25.943508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.560 [2024-10-28 05:11:25.943732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.560 [2024-10-28 05:11:25.943965] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-10-28 05:11:25.943986] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-10-28 05:11:25.944000] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-10-28 05:11:25.947238] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.560 [2024-10-28 05:11:25.956383] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-10-28 05:11:25.956780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-28 05:11:25.956809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-10-28 05:11:25.956825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.560 [2024-10-28 05:11:25.957077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.560 [2024-10-28 05:11:25.957282] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-10-28 05:11:25.957303] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-10-28 05:11:25.957316] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-10-28 05:11:25.960466] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-10-28 05:11:25.969754] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-10-28 05:11:25.970175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-28 05:11:25.970205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-10-28 05:11:25.970237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.560 [2024-10-28 05:11:25.970479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.560 [2024-10-28 05:11:25.970712] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-10-28 05:11:25.970734] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-10-28 05:11:25.970749] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-10-28 05:11:25.973912] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.560 [2024-10-28 05:11:25.983203] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-10-28 05:11:25.983639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-28 05:11:25.983669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-10-28 05:11:25.983695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.560 [2024-10-28 05:11:25.983928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.560 [2024-10-28 05:11:25.984151] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-10-28 05:11:25.984172] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-10-28 05:11:25.984186] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-10-28 05:11:25.987346] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-10-28 05:11:25.996629] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-10-28 05:11:25.996980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-28 05:11:25.997008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-10-28 05:11:25.997025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.560 [2024-10-28 05:11:25.997254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.560 [2024-10-28 05:11:25.997476] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-10-28 05:11:25.997497] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-10-28 05:11:25.997510] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-10-28 05:11:26.000657] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.560 [2024-10-28 05:11:26.010014] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-10-28 05:11:26.010357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-28 05:11:26.010386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-10-28 05:11:26.010402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.560 [2024-10-28 05:11:26.010631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.560 [2024-10-28 05:11:26.010857] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-10-28 05:11:26.010879] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-10-28 05:11:26.010894] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-10-28 05:11:26.014073] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-10-28 05:11:26.023537] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-10-28 05:11:26.023909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-28 05:11:26.023939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-10-28 05:11:26.023957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.560 [2024-10-28 05:11:26.024186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.560 [2024-10-28 05:11:26.024407] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-10-28 05:11:26.024428] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-10-28 05:11:26.024442] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-10-28 05:11:26.027607] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.560 [2024-10-28 05:11:26.036958] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-10-28 05:11:26.037345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-28 05:11:26.037376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-10-28 05:11:26.037393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.560 [2024-10-28 05:11:26.037621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.560 [2024-10-28 05:11:26.037865] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-10-28 05:11:26.037888] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-10-28 05:11:26.037902] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-10-28 05:11:26.041055] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.561 [2024-10-28 05:11:26.050340] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.561 [2024-10-28 05:11:26.050772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-28 05:11:26.050802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-10-28 05:11:26.050828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.561 [2024-10-28 05:11:26.051070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.561 [2024-10-28 05:11:26.051276] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-10-28 05:11:26.051297] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-10-28 05:11:26.051319] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-10-28 05:11:26.054467] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.561 [2024-10-28 05:11:26.063751] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.561 [2024-10-28 05:11:26.064172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-28 05:11:26.064201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-10-28 05:11:26.064220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.561 [2024-10-28 05:11:26.064450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.561 [2024-10-28 05:11:26.064701] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-10-28 05:11:26.064724] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-10-28 05:11:26.064739] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-10-28 05:11:26.067895] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.561 [2024-10-28 05:11:26.077354] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.561 [2024-10-28 05:11:26.077706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-28 05:11:26.077736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-10-28 05:11:26.077753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.561 [2024-10-28 05:11:26.077967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.561 [2024-10-28 05:11:26.078189] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-10-28 05:11:26.078221] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-10-28 05:11:26.078235] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-10-28 05:11:26.081415] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.561 [2024-10-28 05:11:26.090818] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.561 [2024-10-28 05:11:26.091268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-28 05:11:26.091298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-10-28 05:11:26.091323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.561 [2024-10-28 05:11:26.091568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.561 [2024-10-28 05:11:26.091811] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-10-28 05:11:26.091835] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-10-28 05:11:26.091850] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-10-28 05:11:26.095062] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.561 [2024-10-28 05:11:26.104321] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.561 [2024-10-28 05:11:26.104745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-28 05:11:26.104775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-10-28 05:11:26.104791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.561 [2024-10-28 05:11:26.105022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.561 [2024-10-28 05:11:26.105243] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-10-28 05:11:26.105263] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-10-28 05:11:26.105277] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-10-28 05:11:26.108439] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.561 [2024-10-28 05:11:26.117797] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.561 [2024-10-28 05:11:26.118164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-28 05:11:26.118193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-10-28 05:11:26.118210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.561 [2024-10-28 05:11:26.118440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.561 [2024-10-28 05:11:26.118705] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-10-28 05:11:26.118729] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-10-28 05:11:26.118743] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-10-28 05:11:26.121891] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.561 [2024-10-28 05:11:26.131221] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.561 [2024-10-28 05:11:26.131562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-28 05:11:26.131591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-10-28 05:11:26.131609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.561 [2024-10-28 05:11:26.131854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.561 [2024-10-28 05:11:26.132079] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-10-28 05:11:26.132100] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-10-28 05:11:26.132115] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-10-28 05:11:26.135280] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.561 [2024-10-28 05:11:26.144761] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.561 [2024-10-28 05:11:26.145208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-28 05:11:26.145236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-10-28 05:11:26.145269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.561 [2024-10-28 05:11:26.145511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.561 [2024-10-28 05:11:26.145747] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-10-28 05:11:26.145769] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-10-28 05:11:26.145784] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-10-28 05:11:26.148967] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.821 [2024-10-28 05:11:26.158189] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.821 [2024-10-28 05:11:26.158574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.821 [2024-10-28 05:11:26.158603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.821 [2024-10-28 05:11:26.158621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.821 [2024-10-28 05:11:26.158842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.821 [2024-10-28 05:11:26.159068] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.821 [2024-10-28 05:11:26.159090] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.821 [2024-10-28 05:11:26.159104] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.821 [2024-10-28 05:11:26.162233] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:35.821 [2024-10-28 05:11:26.171725] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.821 [2024-10-28 05:11:26.172135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.821 [2024-10-28 05:11:26.172164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420 00:35:35.821 [2024-10-28 05:11:26.172187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set 00:35:35.821 [2024-10-28 05:11:26.172418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor 00:35:35.821 [2024-10-28 05:11:26.172666] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.821 [2024-10-28 05:11:26.172690] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.821 [2024-10-28 05:11:26.172709] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.821 [2024-10-28 05:11:26.176095] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
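Every retry in the block above fails the same way: connect() returns errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 yet, and the bdev_nvme reconnect path keeps polling until the listener appears at 05:11:26.289146 further down. A stand-alone probe of that condition could look like the sketch below (illustrative only; it assumes the nc utility is present on the node and is not something the test itself runs):
# Hypothetical probe, not part of the test: loop until the NVMe/TCP listener accepts connections.
TARGET_IP=10.0.0.2
TARGET_PORT=4420
until nc -z -w 1 "$TARGET_IP" "$TARGET_PORT"; do
    echo "connect() to $TARGET_IP:$TARGET_PORT still refused (errno 111), retrying"
    sleep 1
done
echo "listener on $TARGET_IP:$TARGET_PORT is up"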
00:35:35.821 [2024-10-28 05:11:26.185192] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.821 [2024-10-28 05:11:26.185522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.821 [2024-10-28 05:11:26.185553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420
00:35:35.821 [2024-10-28 05:11:26.185570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set
00:35:35.821 [2024-10-28 05:11:26.185793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor
00:35:35.821 [2024-10-28 05:11:26.186030] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.821 [2024-10-28 05:11:26.186054] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.821 [2024-10-28 05:11:26.186068] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.821 [2024-10-28 05:11:26.189288] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.821 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:35.821 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:35:35.821 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:35:35.821 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:35.821 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:35.821 [2024-10-28 05:11:26.198819] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.821 [2024-10-28 05:11:26.199304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.821 [2024-10-28 05:11:26.199333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420
00:35:35.821 [2024-10-28 05:11:26.199361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set
00:35:35.821 [2024-10-28 05:11:26.199602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor
00:35:35.821 [2024-10-28 05:11:26.199844] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.821 [2024-10-28 05:11:26.199868] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.821 [2024-10-28 05:11:26.199882] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.821 [2024-10-28 05:11:26.203068] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
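The (( i == 0 )) and return 0 lines above are the tail of the wait-for-target helper in autotest_common.sh: the script has been polling the freshly started nvmf_tgt and, once it answers, closes the timed start_nvmf_tgt section via timing_exit. Reduced to its core idea, the polling pattern looks roughly like this (a sketch under assumptions, not the real helper; the PID variable, socket path and retry budget are illustrative):
# Illustrative wait loop: succeed once the target process is alive and its RPC socket exists.
wait_for_tgt() {
    local tgt_pid=$1 i
    for ((i = 20; i > 0; i--)); do
        kill -0 "$tgt_pid" 2>/dev/null || return 1   # target process died, give up
        [[ -S /var/tmp/spdk.sock ]] && return 0      # default RPC socket is ready
        sleep 0.5
    done
    return 1                                         # retry budget exhausted
}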
00:35:35.821 [2024-10-28 05:11:26.212334] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.821 [2024-10-28 05:11:26.212705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.821 [2024-10-28 05:11:26.212734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420
00:35:35.821 [2024-10-28 05:11:26.212751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set
00:35:35.821 [2024-10-28 05:11:26.212980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor
00:35:35.821 [2024-10-28 05:11:26.213201] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.821 [2024-10-28 05:11:26.213223] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.821 [2024-10-28 05:11:26.213236] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.821 [2024-10-28 05:11:26.216424] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.821 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:35.821 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:35:35.821 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:35.821 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:35.822 [2024-10-28 05:11:26.225844] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.822 [2024-10-28 05:11:26.226262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.822 [2024-10-28 05:11:26.226292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420
00:35:35.822 [2024-10-28 05:11:26.226310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set
00:35:35.822 [2024-10-28 05:11:26.226540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor
00:35:35.822 [2024-10-28 05:11:26.226808] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.822 [2024-10-28 05:11:26.226831] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.822 [2024-10-28 05:11:26.226846] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.822 [2024-10-28 05:11:26.227604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:35.822 [2024-10-28 05:11:26.230132] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
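Two setup steps are buried in the retry noise above: nvmf/common.sh installs its cleanup handler with trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT, and bdevperf.sh then creates the TCP transport with rpc_cmd nvmf_create_transport -t tcp -o -u 8192; the "*** TCP Transport Init ***" notice at 05:11:26.227604 is the target acknowledging that call. The trap idiom, reduced to a minimal sketch (the real functions live in the test suite's common scripts):
# Minimal sketch of the cleanup-trap idiom: whatever ends the script (Ctrl-C, TERM,
# or a normal exit), dump shared-memory state and tear the NVMe-oF test setup down.
cleanup() {
    process_shm --id "$NVMF_APP_SHM_ID" || :   # best effort, never fail the trap itself
    nvmftestfini                               # stops the target and undoes the network setup
}
trap cleanup SIGINT SIGTERM EXIT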
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:35.822 [2024-10-28 05:11:26.239390] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.822 [2024-10-28 05:11:26.239756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.822 [2024-10-28 05:11:26.239786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420
00:35:35.822 [2024-10-28 05:11:26.239803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set
00:35:35.822 [2024-10-28 05:11:26.240035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor
00:35:35.822 [2024-10-28 05:11:26.240256] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.822 [2024-10-28 05:11:26.240279] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.822 [2024-10-28 05:11:26.240292] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.822 [2024-10-28 05:11:26.243452] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.822 [2024-10-28 05:11:26.252949] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.822 [2024-10-28 05:11:26.253364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.822 [2024-10-28 05:11:26.253395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420
00:35:35.822 [2024-10-28 05:11:26.253413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set
00:35:35.822 [2024-10-28 05:11:26.253684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor
00:35:35.822 [2024-10-28 05:11:26.253897] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.822 [2024-10-28 05:11:26.253947] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.822 [2024-10-28 05:11:26.253962] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.822 [2024-10-28 05:11:26.257177] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.822 [2024-10-28 05:11:26.266434] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.822 [2024-10-28 05:11:26.266921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.822 [2024-10-28 05:11:26.266957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420
00:35:35.822 [2024-10-28 05:11:26.266976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set
00:35:35.822 [2024-10-28 05:11:26.267226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor
00:35:35.822 [2024-10-28 05:11:26.267434] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.822 [2024-10-28 05:11:26.267457] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.822 [2024-10-28 05:11:26.267472] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.822 Malloc0
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:35.822 [2024-10-28 05:11:26.270819] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:35.822 [2024-10-28 05:11:26.280002] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.822 [2024-10-28 05:11:26.280366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.822 [2024-10-28 05:11:26.280396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc30580 with addr=10.0.0.2, port=4420
00:35:35.822 [2024-10-28 05:11:26.280413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc30580 is same with the state(6) to be set
00:35:35.822 [2024-10-28 05:11:26.280669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc30580 (9): Bad file descriptor
00:35:35.822 [2024-10-28 05:11:26.280890] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.822 [2024-10-28 05:11:26.280914] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.822 [2024-10-28 05:11:26.280944] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.822 [2024-10-28 05:11:26.284183] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:35.822 [2024-10-28 05:11:26.289146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:35.822 05:11:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2477062
[2024-10-28 05:11:26.293486] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
[2024-10-28 05:11:26.366584] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:35:37.454 3310.29 IOPS, 12.93 MiB/s
[2024-10-28T04:11:28.984Z] 3931.00 IOPS, 15.36 MiB/s
[2024-10-28T04:11:29.918Z] 4421.00 IOPS, 17.27 MiB/s
[2024-10-28T04:11:30.852Z] 4803.00 IOPS, 18.76 MiB/s
[2024-10-28T04:11:31.869Z] 5111.73 IOPS, 19.97 MiB/s
[2024-10-28T04:11:32.804Z] 5380.83 IOPS, 21.02 MiB/s
[2024-10-28T04:11:33.739Z] 5590.77 IOPS, 21.84 MiB/s
[2024-10-28T04:11:34.674Z] 5778.50 IOPS, 22.57 MiB/s
00:35:44.078 Latency(us)
[2024-10-28T04:11:34.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:44.078 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:44.078 Verification LBA range: start 0x0 length 0x4000
00:35:44.078 Nvme1n1 : 15.01 5935.39 23.19 10639.52 0.00 7697.08 845.51 18199.71
[2024-10-28T04:11:34.674Z] ===================================================================================================================
[2024-10-28T04:11:34.674Z] Total : 5935.39 23.19 10639.52 0.00 7697.08 845.51 18199.71
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
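Filtered out of the retry noise, the bdevperf host test drives the target through a short RPC sequence before the 15-second verify workload runs: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, expose it through subsystem nqn.2016-06.io.spdk:cnode1, and add the TCP listener on 10.0.0.2:4420, after which throughput ramps to the 5935 IOPS average reported in the table above. In the trace these go through the rpc_cmd wrapper, which by default forwards the arguments to scripts/rpc.py; issued by hand they would look roughly like this (a sketch assuming the default /var/tmp/spdk.sock RPC socket):
# Approximate hand-run equivalent of the traced rpc_cmd calls (run from the SPDK repo root).
RPC=scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                                    # transport options as used by the test
$RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, set serial number
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # attach the malloc bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420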
00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:44.337 rmmod nvme_tcp 00:35:44.337 rmmod nvme_fabrics 00:35:44.337 rmmod nvme_keyring 00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 2477712 ']' 00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 2477712 00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2477712 ']' 00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2477712 00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:44.337 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2477712 00:35:44.596 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:44.596 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:44.596 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2477712' 00:35:44.596 killing process with pid 2477712 00:35:44.596 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2477712 00:35:44.596 05:11:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2477712 00:35:44.855 05:11:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:44.855 05:11:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:44.855 05:11:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:44.856 05:11:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:44.856 05:11:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:35:44.856 05:11:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:44.856 05:11:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:35:44.856 05:11:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:44.856 05:11:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:44.856 05:11:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.856 05:11:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:44.856 05:11:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:46.758 05:11:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:46.758 00:35:46.758 real 0m23.263s 00:35:46.758 user 1m2.223s 00:35:46.758 sys 0m4.293s 00:35:46.758 05:11:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:46.758 05:11:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # 
set +x 00:35:46.758 ************************************ 00:35:46.758 END TEST nvmf_bdevperf 00:35:46.758 ************************************ 00:35:46.758 05:11:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:46.758 05:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:46.758 05:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:46.758 05:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.758 ************************************ 00:35:46.758 START TEST nvmf_target_disconnect 00:35:46.758 ************************************ 00:35:46.758 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:46.758 * Looking for test storage... 00:35:46.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1689 -- # lcov --version 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:47.018 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:35:47.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.019 --rc genhtml_branch_coverage=1 00:35:47.019 --rc genhtml_function_coverage=1 00:35:47.019 --rc genhtml_legend=1 00:35:47.019 --rc geninfo_all_blocks=1 00:35:47.019 --rc geninfo_unexecuted_blocks=1 00:35:47.019 00:35:47.019 ' 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:35:47.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.019 --rc genhtml_branch_coverage=1 00:35:47.019 --rc genhtml_function_coverage=1 00:35:47.019 --rc genhtml_legend=1 00:35:47.019 --rc geninfo_all_blocks=1 00:35:47.019 --rc geninfo_unexecuted_blocks=1 00:35:47.019 00:35:47.019 ' 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:35:47.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.019 --rc genhtml_branch_coverage=1 00:35:47.019 --rc genhtml_function_coverage=1 00:35:47.019 --rc genhtml_legend=1 00:35:47.019 --rc geninfo_all_blocks=1 00:35:47.019 --rc geninfo_unexecuted_blocks=1 00:35:47.019 00:35:47.019 ' 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:35:47.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.019 --rc genhtml_branch_coverage=1 00:35:47.019 --rc genhtml_function_coverage=1 00:35:47.019 --rc genhtml_legend=1 00:35:47.019 --rc geninfo_all_blocks=1 00:35:47.019 --rc geninfo_unexecuted_blocks=1 00:35:47.019 00:35:47.019 ' 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:47.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:47.019 05:11:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:49.551 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:49.551 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:49.551 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:49.551 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
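The gather_supported_nvmf_pci_devs trace above matches the two Intel E810 functions (device ID 0x159b at 0000:0a:00.0 and 0000:0a:00.1) and then resolves each PCI function to its kernel netdev by globbing the device's net/ directory in sysfs, which is how cvl_0_0 and cvl_0_1 are found. The same lookup as a stand-alone sketch (hard-coding the two addresses seen in this run):
# Stand-alone version of the sysfs lookup in the trace: list the netdevs behind each PCI function.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one directory per network interface
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done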
00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:49.551 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:49.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:49.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:35:49.552 00:35:49.552 --- 10.0.0.2 ping statistics --- 00:35:49.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:49.552 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:49.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:49.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:35:49.552 00:35:49.552 --- 10.0.0.1 ping statistics --- 00:35:49.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:49.552 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:49.552 ************************************ 00:35:49.552 START TEST nvmf_target_disconnect_tc1 00:35:49.552 ************************************ 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:49.552 05:11:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:49.552 [2024-10-28 05:11:39.963143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-10-28 05:11:39.963215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22a9310 with addr=10.0.0.2, port=4420 00:35:49.552 [2024-10-28 05:11:39.963250] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:49.552 [2024-10-28 05:11:39.963279] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:49.552 [2024-10-28 05:11:39.963295] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:49.552 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:49.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:49.552 Initializing NVMe Controllers 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:49.552 00:35:49.552 real 0m0.210s 00:35:49.552 user 0m0.047s 00:35:49.552 sys 0m0.063s 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:49.552 ************************************ 00:35:49.552 END TEST nvmf_target_disconnect_tc1 00:35:49.552 ************************************ 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:35:49.552 05:11:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:49.552 ************************************ 00:35:49.552 START TEST nvmf_target_disconnect_tc2 00:35:49.552 ************************************ 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2480822 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2480822 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2480822 ']' 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:49.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:49.552 05:11:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:49.552 [2024-10-28 05:11:40.086436] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:35:49.552 [2024-10-28 05:11:40.086529] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:49.811 [2024-10-28 05:11:40.227372] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:49.811 [2024-10-28 05:11:40.265965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:49.811 [2024-10-28 05:11:40.317376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:49.811 [2024-10-28 05:11:40.317430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:49.811 [2024-10-28 05:11:40.317457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:49.811 [2024-10-28 05:11:40.317469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:49.811 [2024-10-28 05:11:40.317478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:49.811 [2024-10-28 05:11:40.319146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:49.811 [2024-10-28 05:11:40.319213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:49.811 [2024-10-28 05:11:40.319276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:49.811 [2024-10-28 05:11:40.319279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.745 Malloc0 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.745 [2024-10-28 05:11:41.155372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
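For reference, the nvmfappstart/waitforlisten sequence traced above amounts to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and polling the RPC socket until it answers. A sketch, assuming $SPDK_DIR points at the SPDK tree used in this job and that the namespace already exists (it was created earlier in this log); the polling loop is an approximation of waitforlisten, not the helper itself:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # Wait until the target answers on the default /var/tmp/spdk.sock RPC socket.
    until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up"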
00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.745 [2024-10-28 05:11:41.183540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2480975 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:50.745 05:11:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:52.647 05:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2480822 00:35:52.647 05:11:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 
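The rpc_cmd calls traced above are thin wrappers over scripts/rpc.py, so the tc2 target setup can be reproduced by hand with the same arguments, followed by the reconnect workload the test then leaves running for 10 seconds. A sketch using the values from this run (RPC flags copied verbatim from the trace; $SPDK_DIR as in the previous sketch):

    RPC="$SPDK_DIR/scripts/rpc.py"
    "$RPC" bdev_malloc_create 64 512 -b Malloc0            # 64 MB backing bdev, 512 B blocks
    "$RPC" nvmf_create_transport -t tcp -o                 # TCP transport, flags as traced
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Host-side I/O that the upcoming disconnect will interrupt:
    "$SPDK_DIR/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!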
00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Write completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Write completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Write completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Write completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Write completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Write completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Write completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Read completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Write completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.647 Write completed with error (sct=0, sc=8) 00:35:52.647 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 [2024-10-28 05:11:43.212120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error 
(sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 [2024-10-28 05:11:43.212477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read 
completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 [2024-10-28 05:11:43.212771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O 
failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Write completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 Read completed with error (sct=0, sc=8) 00:35:52.648 starting I/O failed 00:35:52.648 [2024-10-28 05:11:43.213563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:52.648 [2024-10-28 05:11:43.213769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.648 [2024-10-28 05:11:43.213810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.648 qpair failed and we were unable to recover it. 00:35:52.648 [2024-10-28 05:11:43.214000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.648 [2024-10-28 05:11:43.214027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.648 qpair failed and we were unable to recover it. 00:35:52.648 [2024-10-28 05:11:43.214195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.648 [2024-10-28 05:11:43.214221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.648 qpair failed and we were unable to recover it. 00:35:52.648 [2024-10-28 05:11:43.214363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.648 [2024-10-28 05:11:43.214390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.648 qpair failed and we were unable to recover it. 00:35:52.648 [2024-10-28 05:11:43.214640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.648 [2024-10-28 05:11:43.214667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.648 qpair failed and we were unable to recover it. 00:35:52.648 [2024-10-28 05:11:43.214770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.648 [2024-10-28 05:11:43.214796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.648 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.214928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.214954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.215088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.215113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 
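errno 111 here is ECONNREFUSED: the reconnect example keeps retrying 10.0.0.2:4420, but the nvmf_tgt that owned the listener was just killed with kill -9, so every new TCP connection is refused until a target is listening again. A quick way to confirm that state from the shell, assuming the cvl_0_0_ns_spdk namespace created earlier in this log is still present:

    # An empty table means nothing is accepting connections on port 4420 any more.
    ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'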
00:35:52.649 [2024-10-28 05:11:43.215281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.215307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.215453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.215501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.215639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.215666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.215778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.215804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.215951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.215977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.216113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.216139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.216288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.216314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.216452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.216477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.216623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.216658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.216771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.216798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 
00:35:52.649 [2024-10-28 05:11:43.216918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.216944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.217054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.217080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.217219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.217244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.217419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.217463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.217617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.217654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.217778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.217805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.217953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.217980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.218141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.218168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.218318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.218369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.218490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.218517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 
00:35:52.649 [2024-10-28 05:11:43.218653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.218680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.218846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.218872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.219020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.219046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.219184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.219210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.219353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.219381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.219564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.219590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.219754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.219802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.220012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.220057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.220247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.220274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.220418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.220460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 
00:35:52.649 [2024-10-28 05:11:43.220592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.220640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.220830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.220857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.221000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.221026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.221138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.221164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.221310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.221337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.221514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.221559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.221732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.221773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.221940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.649 [2024-10-28 05:11:43.221968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.649 qpair failed and we were unable to recover it. 00:35:52.649 [2024-10-28 05:11:43.222255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.222282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.222401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.222429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 
00:35:52.650 [2024-10-28 05:11:43.222586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.222643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.222763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.222791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.222923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.222950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.223160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.223207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.223394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.223440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.223601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.223646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.223814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.223841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.223955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.223981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.224302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.224328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.224519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.224548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 
00:35:52.650 [2024-10-28 05:11:43.224689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.224716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.224832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.224860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.225056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.225113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.225372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.225402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.225577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.225603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.225741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.225781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.225900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.225928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.226124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.226154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.226335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.226364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.226516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.226546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 
00:35:52.650 [2024-10-28 05:11:43.226732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.226771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.226937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.226965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.227153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.227181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.227320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.227347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.227489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.227517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.227689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.227716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.227853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.227880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.227993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.228019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.228155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.228181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.228333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.228360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 
00:35:52.650 [2024-10-28 05:11:43.228526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.228555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.228711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.228738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.228869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.228895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.229068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.229097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.229317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.229363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.229516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.229545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.229716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.229743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.229862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.650 [2024-10-28 05:11:43.229900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.650 qpair failed and we were unable to recover it. 00:35:52.650 [2024-10-28 05:11:43.230054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.230083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.230276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.230303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 
00:35:52.651 [2024-10-28 05:11:43.230473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.230500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.230697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.230724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.230865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.230905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.231090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.231118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.231241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.231269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.231383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.231428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.231626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.231666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.231803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.231838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.231991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.232038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.232173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.232216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 
00:35:52.651 [2024-10-28 05:11:43.232374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.232403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.232570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.232596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.232728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.232755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.232869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.232896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.233110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.233155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.233341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.233367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.233477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.233504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.233612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.233651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.233792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.233818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.233933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.233977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 
00:35:52.651 [2024-10-28 05:11:43.234128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.234157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.234344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.234373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.234551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.234580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.234735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.234762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.234886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.234915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.235081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.235124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.235247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.235273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.235433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.235463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.235647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.235691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.235845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.235884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 
00:35:52.651 [2024-10-28 05:11:43.236100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.236149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.236282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.236314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.236480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.236507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.236661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.236699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.236844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.236873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.237016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.237043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.237161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.237187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.237301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.237329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.651 qpair failed and we were unable to recover it. 00:35:52.651 [2024-10-28 05:11:43.237477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.651 [2024-10-28 05:11:43.237504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.652 qpair failed and we were unable to recover it. 00:35:52.652 [2024-10-28 05:11:43.237618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.652 [2024-10-28 05:11:43.237654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.652 qpair failed and we were unable to recover it. 
00:35:52.652 [2024-10-28 05:11:43.237796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.652 [2024-10-28 05:11:43.237824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.652 qpair failed and we were unable to recover it. 00:35:52.652 [2024-10-28 05:11:43.237965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.652 [2024-10-28 05:11:43.237992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.652 qpair failed and we were unable to recover it. 00:35:52.652 [2024-10-28 05:11:43.238144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.652 [2024-10-28 05:11:43.238174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.652 qpair failed and we were unable to recover it. 00:35:52.652 [2024-10-28 05:11:43.238335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.652 [2024-10-28 05:11:43.238365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.652 qpair failed and we were unable to recover it. 00:35:52.652 [2024-10-28 05:11:43.238557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.652 [2024-10-28 05:11:43.238590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.652 qpair failed and we were unable to recover it. 00:35:52.652 [2024-10-28 05:11:43.238756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.652 [2024-10-28 05:11:43.238797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.652 qpair failed and we were unable to recover it. 00:35:52.652 [2024-10-28 05:11:43.238942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.652 [2024-10-28 05:11:43.238974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.652 qpair failed and we were unable to recover it. 00:35:52.652 [2024-10-28 05:11:43.239174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.652 [2024-10-28 05:11:43.239201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.652 qpair failed and we were unable to recover it. 00:35:52.940 [2024-10-28 05:11:43.239316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.239342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.239522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.239549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 
00:35:52.941 [2024-10-28 05:11:43.239690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.239717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.239857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.239884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.240048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.240075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.240200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.240230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.240374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.240404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.240564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.240591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.240723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.240764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.240911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.240939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.241107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.241134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.241248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.241276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 
00:35:52.941 [2024-10-28 05:11:43.241439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.241470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.241624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.241663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.241824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.241851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.242165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.242220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.242506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.242557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.242700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.242729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.242895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.242926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.243074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.243106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.243323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.243350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.243503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.243535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 
00:35:52.941 [2024-10-28 05:11:43.243759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.243800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.243933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.243973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.244239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.244292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.244534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.244566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.244698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.244726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.244864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.244891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.245134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.245186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.245416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.245462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.245622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.245666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.245835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.245862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 
00:35:52.941 [2024-10-28 05:11:43.245986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.246015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.246254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.246305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.246595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.246663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.246809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.246836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.246988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.247017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.247160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.247204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.247373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.247431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.247562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.941 [2024-10-28 05:11:43.247588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.941 qpair failed and we were unable to recover it. 00:35:52.941 [2024-10-28 05:11:43.247787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.247815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.247971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.248001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 
00:35:52.942 [2024-10-28 05:11:43.248223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.248275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.248445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.248475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.248622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.248657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.248799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.248826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.248983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.249013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.249176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.249205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.249380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.249441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.249575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.249602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.249772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.249799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.249931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.249957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 
00:35:52.942 [2024-10-28 05:11:43.250085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.250114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.250299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.250328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.250469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.250513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.250681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.250709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.250850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.250877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.251045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.251074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.251298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.251324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.251435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.251462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.251648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.251708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.251858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.251887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 
00:35:52.942 [2024-10-28 05:11:43.252044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.252074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.252239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.252311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.252563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.252591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.252732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.252760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.252878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.252906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.253112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.253141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.253321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.253351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.253493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.253522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.253734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.253774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.253897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.253925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 
00:35:52.942 [2024-10-28 05:11:43.254090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.254120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.254260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.254289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.254486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.254515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.254685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.254721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.254836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.254864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.255031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.255057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.942 qpair failed and we were unable to recover it. 00:35:52.942 [2024-10-28 05:11:43.255167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.942 [2024-10-28 05:11:43.255193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.255339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.255367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.255542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.255568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.255684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.255714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 
00:35:52.943 [2024-10-28 05:11:43.255884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.255930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.256092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.256138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.256303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.256347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.256481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.256509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.256688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.256732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.256907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.256935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.257041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.257068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.257216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.257242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.257373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.257400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.257530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.257558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 
00:35:52.943 [2024-10-28 05:11:43.257728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.257756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.257899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.257926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.258095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.258121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.258326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.258393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.258531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.258558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.258717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.258762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.258966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.259020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.259198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.259227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.259466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.259529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.259663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.259691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 
00:35:52.943 [2024-10-28 05:11:43.259844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.259878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.260006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.260036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.260277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.260328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.260497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.260524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.260642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.260669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.260805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.260834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.260999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.261026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.261158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.261185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.261344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.261373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.261540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.261567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 
00:35:52.943 [2024-10-28 05:11:43.261731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.261758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.261927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.261956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.262120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.262165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.262323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.262352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.262516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.262543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.262680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.262708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.262882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.262910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.943 [2024-10-28 05:11:43.263026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.943 [2024-10-28 05:11:43.263053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.943 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.263230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.263269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.263454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.263481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 
00:35:52.944 [2024-10-28 05:11:43.263648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.263676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.263816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.263842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.263977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.264004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.264113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.264140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.264364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.264390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.264554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.264580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.264752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.264783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.264975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.265022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.265175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.265204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.265435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.265480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 
00:35:52.944 [2024-10-28 05:11:43.265663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.265692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.265896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.265923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.266072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.266099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.266240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.266268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.266412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.266439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.266607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.266649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.266804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.266849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.266993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.267038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.267160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.267205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.267345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.267372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 
00:35:52.944 [2024-10-28 05:11:43.267542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.267569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.267715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.267743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.267889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.267916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.268049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.268076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.268270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.268315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.944 qpair failed and we were unable to recover it. 00:35:52.944 [2024-10-28 05:11:43.268454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.944 [2024-10-28 05:11:43.268481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.945 qpair failed and we were unable to recover it. 00:35:52.945 [2024-10-28 05:11:43.268644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.945 [2024-10-28 05:11:43.268703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.945 qpair failed and we were unable to recover it. 00:35:52.945 [2024-10-28 05:11:43.268895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.945 [2024-10-28 05:11:43.268936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.945 qpair failed and we were unable to recover it. 00:35:52.945 [2024-10-28 05:11:43.269083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.945 [2024-10-28 05:11:43.269111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.945 qpair failed and we were unable to recover it. 00:35:52.945 [2024-10-28 05:11:43.269340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.945 [2024-10-28 05:11:43.269399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.945 qpair failed and we were unable to recover it. 
00:35:52.945 [2024-10-28 05:11:43.269563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.945 [2024-10-28 05:11:43.269593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.945 qpair failed and we were unable to recover it. 00:35:52.945 [2024-10-28 05:11:43.269784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.945 [2024-10-28 05:11:43.269811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.945 qpair failed and we were unable to recover it. 00:35:52.945 [2024-10-28 05:11:43.269969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.269999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.270151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.270181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.270314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.270343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.270520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.270566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.270712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.270739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.270884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.270915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.271107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.271151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.271323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.271371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 
00:35:52.946 [2024-10-28 05:11:43.271538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.271565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.271729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.271757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.271898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.271925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.272088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.272114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.272258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.272285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.272425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.272451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.272616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.272649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.272789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.272838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.273020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.273049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.273281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.273336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 
00:35:52.946 [2024-10-28 05:11:43.273542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.273568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.273710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.273737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.273878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.946 [2024-10-28 05:11:43.273905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.946 qpair failed and we were unable to recover it. 00:35:52.946 [2024-10-28 05:11:43.274041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.274067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.274258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.274322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.274459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.274502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.274641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.274671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.274835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.274865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.275016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.275043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.275152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.275195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 
00:35:52.947 [2024-10-28 05:11:43.275341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.275371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.275556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.275585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.275761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.275790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.275898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.275925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.276117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.276147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.276342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.276380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.276563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.276592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.276759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.276787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.276925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.276958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.277107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.277136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 
00:35:52.947 [2024-10-28 05:11:43.277312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.277341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.277579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.277608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.277766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.277793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.277948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.277978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.278190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.278217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.278423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.278453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.278583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.278610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.278766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.278793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.278899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.278948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.279072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.279102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 
00:35:52.947 [2024-10-28 05:11:43.279256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.279287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.279441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.279471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.279648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.279692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.279834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.279861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.280010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.280036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.280200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.280230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.280380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.280409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.280578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.280604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.280744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.280771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.280910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.280945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 
00:35:52.947 [2024-10-28 05:11:43.281061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.281104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.281259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.281288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.281432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.281462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.281646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.947 [2024-10-28 05:11:43.281691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.947 qpair failed and we were unable to recover it. 00:35:52.947 [2024-10-28 05:11:43.281810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.281836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.281981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.282014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.282194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.282224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.282384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.282413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.282590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.282649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.282811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.282840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 
00:35:52.948 [2024-10-28 05:11:43.283000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.283047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.283203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.283247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.283374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.283401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.283564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.283591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.283767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.283795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.283967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.283993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.284149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.284178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.284391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.284421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.284598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.284645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.284828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.284855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 
00:35:52.948 [2024-10-28 05:11:43.285086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.285144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.285326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.285355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.285482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.285511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.285688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.285715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.285882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.285908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.286060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.286104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.286324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.286377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.286524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.286550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.286703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.286731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.286874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.286903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 
00:35:52.948 [2024-10-28 05:11:43.287089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.287119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.287272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.287301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.287467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.287496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.287651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.287695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.287837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.287864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.288015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.288042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.288226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.288256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.288435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.288465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.288584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.288611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.288769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.288796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 
00:35:52.948 [2024-10-28 05:11:43.288931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.288964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.289120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.289147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.289263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.289290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.289496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.289523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.289668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.289695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.289833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.289859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.290004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.290031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.290255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.290315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.948 qpair failed and we were unable to recover it. 00:35:52.948 [2024-10-28 05:11:43.290472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.948 [2024-10-28 05:11:43.290503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.290658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.290702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 
00:35:52.949 [2024-10-28 05:11:43.290819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.290846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.291020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.291062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.291181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.291214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.291388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.291418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.291596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.291643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.291806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.291833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.292020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.292082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.292231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.292267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.292405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.292434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.292587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.292646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 
00:35:52.949 [2024-10-28 05:11:43.292819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.292847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.292992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.293036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.293179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.293206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.293381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.293408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.293521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.293547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.293702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.293730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.293869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.293896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.294073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.294099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.294219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.294245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.294381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.294408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 
00:35:52.949 [2024-10-28 05:11:43.294545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.294572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.294753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.294783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.294929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.294956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.295089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.295117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.295258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.295285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.295450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.295476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.295616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.295659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.295793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.295821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.295958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.295985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.296150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.296181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 
00:35:52.949 [2024-10-28 05:11:43.296348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.296375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.949 [2024-10-28 05:11:43.296515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.949 [2024-10-28 05:11:43.296543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.949 qpair failed and we were unable to recover it. 00:35:52.951 [2024-10-28 05:11:43.296692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.951 [2024-10-28 05:11:43.296720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.951 qpair failed and we were unable to recover it. 00:35:52.951 [2024-10-28 05:11:43.296864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.296891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.297049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.297076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.297190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.297217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.297343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.297372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.297521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.297550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.297699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.297726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.297856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.297886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 
00:35:52.952 [2024-10-28 05:11:43.298133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.298186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.298364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.298393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.298580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.298607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.298768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.298809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.298986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.299015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.299148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.299178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.299343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.299373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.299501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.299528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.299646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.299675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.299810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.299838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 
00:35:52.952 [2024-10-28 05:11:43.299989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.300015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.300180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.300222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.300370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.300399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.300550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.300576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.300811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.300839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.300984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.301016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.301180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.301207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.301347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.301374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.301516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.301559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.301701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.301728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 
00:35:52.952 [2024-10-28 05:11:43.301849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.301879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.302073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.302104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.302280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.302331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.302475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.302502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.302646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.302674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.302869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.302914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.303078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.303123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.303301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.303329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.303492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.303519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 00:35:52.952 [2024-10-28 05:11:43.303668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.952 [2024-10-28 05:11:43.303696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.952 qpair failed and we were unable to recover it. 
00:35:52.952 [2024-10-28 05:11:43.303842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.303870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.304036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.304062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.304196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.304223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.304331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.304358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.304535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.304562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.304695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.304723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.304881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.304925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.305115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.305160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.305275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.305302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.305472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.305499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 
00:35:52.953 [2024-10-28 05:11:43.305613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.305662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.305807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.305834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.305975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.306004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.306159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.306199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.306367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.306395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.306546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.306586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.953 [2024-10-28 05:11:43.306793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.953 [2024-10-28 05:11:43.306826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.953 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.306970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.307002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.307188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.307218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.307422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.307477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 
00:35:52.954 [2024-10-28 05:11:43.307648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.307697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.307833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.307860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.308158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.308209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.308449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.308502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.308695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.308722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.308864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.308891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.309034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.309068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.309227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.309315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.309468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.309498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.309689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.309716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 
00:35:52.954 [2024-10-28 05:11:43.309828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.309856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.310031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.310077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.310238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.310282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.310454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.310481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.310648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.310692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.310881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.310927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.311080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.311124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.311258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.311290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.311412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.311455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.311601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.311654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 
00:35:52.954 [2024-10-28 05:11:43.311820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.954 [2024-10-28 05:11:43.311849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.954 qpair failed and we were unable to recover it. 00:35:52.954 [2024-10-28 05:11:43.312005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.312036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.312257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.312328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.312487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.312517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.312690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.312718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.312876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.312906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.313104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.313131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.313313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.313343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.313470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.313501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.313679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.313709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 
00:35:52.955 [2024-10-28 05:11:43.313891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.313942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.314110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.314138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.314364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.314417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.314586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.314613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.314764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.314793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.314923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.314965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.315143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.315173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.315405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.315432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.315578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.315606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.315746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.315774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 
00:35:52.955 [2024-10-28 05:11:43.315938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.315965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.316131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.316158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.316333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.316387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.316545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.316575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.316784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.316813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.316977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.955 [2024-10-28 05:11:43.317028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.955 qpair failed and we were unable to recover it. 00:35:52.955 [2024-10-28 05:11:43.317231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.317283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.317531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.317584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.317729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.317758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.317945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.317998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 
00:35:52.956 [2024-10-28 05:11:43.318168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.318225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.318354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.318398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.318549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.318577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.319120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.319151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.319384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.319413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.319559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.319587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.319757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.319785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.319907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.319942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.320094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.320139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.320248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.320275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 
00:35:52.956 [2024-10-28 05:11:43.320404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.320433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.320576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.320603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.320792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.320851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.321005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.321048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.321173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.321203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.321360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.321390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.321523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.321554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.321720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.321748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.321904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.321935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.322065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.322095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 
00:35:52.956 [2024-10-28 05:11:43.322216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.322245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.322444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.322490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.322656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.322700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.322854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.322903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.323053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.323099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.323229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.323274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.956 [2024-10-28 05:11:43.323420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.956 [2024-10-28 05:11:43.323461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.956 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.323626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.323692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.323856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.323888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.324019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.324049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 
00:35:52.957 [2024-10-28 05:11:43.324327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.324380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.324533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.324562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.324792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.324839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.325003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.325047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.325232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.325261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.325419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.325446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.325595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.325643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.325768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.325796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.326023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.326052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.326206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.326235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 
00:35:52.957 [2024-10-28 05:11:43.326362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.326391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.326539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.326568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.326727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.326754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.326897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.326947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.327105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.327154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.327395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.327423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.327564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.327590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.327742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.327770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.327877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.327904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-10-28 05:11:43.328069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.328099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 
00:35:52.957 [2024-10-28 05:11:43.328269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.957 [2024-10-28 05:11:43.328314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.328500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.328532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.328709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.328737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.328877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.328920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.329101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.329130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.329348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.329374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.329509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.329535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.329694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.329721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.329851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.329878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.330034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.330061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 
00:35:52.958 [2024-10-28 05:11:43.330169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.330195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.330451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.330504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.330704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.330732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.330848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.330875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.331115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.331168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.331433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.331484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.331648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.331674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.331809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.331836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.331968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.331998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.332199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.332254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 
00:35:52.958 [2024-10-28 05:11:43.332385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.332429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.332595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.332621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.958 [2024-10-28 05:11:43.332770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.958 [2024-10-28 05:11:43.332797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.958 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.332907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.332944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.333129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.333159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.333396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.333426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.333579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.333606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.333774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.333801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.333981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.334008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.334202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.334254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 
00:35:52.959 [2024-10-28 05:11:43.334420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.334446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.334568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.334607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.334762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.334803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.334980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.335016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.335247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.335301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.335530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.335557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.335725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.335754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.335867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.335895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.336055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.336086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.336230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.336258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 
00:35:52.959 [2024-10-28 05:11:43.336493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.336525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.336705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.336733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.336870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.336896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.337036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.337062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.337196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.337226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.337376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.337405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.337585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.337648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.337780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.337811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.337947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.337975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.338178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.338205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 
00:35:52.959 [2024-10-28 05:11:43.338353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.338381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.338532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.959 [2024-10-28 05:11:43.338571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.959 qpair failed and we were unable to recover it. 00:35:52.959 [2024-10-28 05:11:43.338737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.338766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.338904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.338950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.339078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.339108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.339340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.339370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.339534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.339562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.339682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.339710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.339849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.339877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.340128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.340177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 
00:35:52.960 [2024-10-28 05:11:43.340391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.340447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.340598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.340644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.340828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.340855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.341005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.341031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.341167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.341197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.341406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.341436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.341601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.341641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.341784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.341811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.341977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.342011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.342233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.342259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 
00:35:52.960 [2024-10-28 05:11:43.342419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.342448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.342622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.342676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.342824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.342851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.342985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.343021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.343180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.343254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.343401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.343428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.343573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.343599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.343753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.343793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.343974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.344003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.344173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.344202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 
00:35:52.960 [2024-10-28 05:11:43.344342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.344370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.344573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.344645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.344767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.344798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.344950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.344993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.345211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.345241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.345388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.345417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.960 [2024-10-28 05:11:43.345601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.960 [2024-10-28 05:11:43.345648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.960 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.345786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.345812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.345955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.345981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.346113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.346143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 
00:35:52.961 [2024-10-28 05:11:43.346287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.346313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.346424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.346450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.346560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.346587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.346766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.346794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.346904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.346953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.347171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.347201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.347350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.347379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.347498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.347527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.347682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.347710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.347889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.347940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 
00:35:52.961 [2024-10-28 05:11:43.348115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.348144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.348320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.348378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.348549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.348594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.348716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.348744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.348857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.348884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.349087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.349117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.349278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.349322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.349460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.349487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.349638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.349667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.349834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.349861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 
00:35:52.961 [2024-10-28 05:11:43.350027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.350098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.350406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.350465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.350627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.350668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.350828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.350855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.350992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.351023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.351177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.351206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.351361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.351387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.351524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.351564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.351715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.351745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.351935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.351984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 
00:35:52.961 [2024-10-28 05:11:43.352165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.352193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.352360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.352404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.352548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.352575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.352723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.352768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.352935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.352978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.353136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.353182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.353325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.353354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.353498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.353525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.353676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.353704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 00:35:52.961 [2024-10-28 05:11:43.353869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.961 [2024-10-28 05:11:43.353896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.961 qpair failed and we were unable to recover it. 
00:35:52.962 [2024-10-28 05:11:43.354060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.354087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.354243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.354287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.354431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.354458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.354621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.354662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.354820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.354865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.355042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.355069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.355261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.355304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.355469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.355496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.355631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.355681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.355854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.355883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 
00:35:52.962 [2024-10-28 05:11:43.356027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.356057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.356237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.356266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.356445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.356474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.356627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.356666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.356829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.356855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.357007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.357036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.357156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.357197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.357361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.357390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.357515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.357550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.357716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.357743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 
00:35:52.962 [2024-10-28 05:11:43.357906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.357942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.358110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.358136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.358359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.358408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.358594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.358621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.358771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.358798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.358947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.358977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.359188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.359249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.359399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.359428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.359575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.359620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.359834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.359874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 
00:35:52.962 [2024-10-28 05:11:43.360020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.360049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.962 [2024-10-28 05:11:43.360238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.962 [2024-10-28 05:11:43.360283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.962 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.360429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.360475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.360608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.360660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.360776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.360804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.360943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.360969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.361134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.361161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.361269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.361297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.361434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.361461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.361604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.361638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 
00:35:52.963 [2024-10-28 05:11:43.361776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.361803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.361940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.361967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.362132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.362177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.362338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.362383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.362551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.362577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.362741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.362790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.362923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.362968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.363114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.363158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.363347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.363376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.363510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.363536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 
00:35:52.963 [2024-10-28 05:11:43.363704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.363750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.363913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.363945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.364136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.364163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.364295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.364322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.364488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.364532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.364702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.364729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.364895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.364927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.365092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.365135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.365431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.365484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.365688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.365717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 
00:35:52.963 [2024-10-28 05:11:43.365859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.365905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.366074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.366103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.366322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.366385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.366525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.366552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.366661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.366690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.366867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.366914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.367072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.367118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.367251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.367278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.963 [2024-10-28 05:11:43.367424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.963 [2024-10-28 05:11:43.367453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.963 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.367591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.367618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 
00:35:52.964 [2024-10-28 05:11:43.367746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.367775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.367933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.367975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.368156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.368186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.368441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.368506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.368670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.368702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.368820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.368850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.369043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.369070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.369212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.369239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.369363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.369419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.369543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.369574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 
00:35:52.964 [2024-10-28 05:11:43.369741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.369771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.369945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.369973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.370093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.370120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.370226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.370254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.370411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.370452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.370626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.370667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.370835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.370863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.371091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.371141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.371371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.371420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.371601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.371631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 
00:35:52.964 [2024-10-28 05:11:43.371794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.371821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.371965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.371991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.372125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.372155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.372337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.372366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.372516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.372544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.372731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.372772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.372940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.372971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.964 [2024-10-28 05:11:43.373142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.964 [2024-10-28 05:11:43.373168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.964 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.373305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.373331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.373493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.373523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 
00:35:52.965 [2024-10-28 05:11:43.373688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.373714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.373880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.373906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.374049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.374091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.374244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.374273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.374426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.374457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.374594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.374622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.374771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.374799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.374945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.374972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.375167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.375218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.375394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.375446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 
00:35:52.965 [2024-10-28 05:11:43.375606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.375643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.375800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.375837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.376027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.376056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.376283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.376313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.376495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.376524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.376746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.376774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.376942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.376971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.377093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.377137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.377369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.377418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 00:35:52.965 [2024-10-28 05:11:43.377595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.377624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.965 qpair failed and we were unable to recover it. 
00:35:52.965 [2024-10-28 05:11:43.377794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.965 [2024-10-28 05:11:43.377821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.377997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.378036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.378183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.378228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.378495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.378545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.378711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.378739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.378903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.378935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.379078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.379105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.379281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.379308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.379476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.379506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.379645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.379691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 
00:35:52.966 [2024-10-28 05:11:43.379839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.379865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.380028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.380071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.380286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.380317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.380449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.380479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.380639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.380692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.380801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.380828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.380947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.380974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.381128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.381154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.381326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.381355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.381517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.381544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 
00:35:52.966 [2024-10-28 05:11:43.381663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.381690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.381826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.381853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.382002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.382032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.382255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.382281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.382409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.382436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.382546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.382573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.382679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.382706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.382870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.382896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.383047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.383077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.383276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.383330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 
00:35:52.966 [2024-10-28 05:11:43.383494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.383521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.383632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.383665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.383809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.383835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.966 [2024-10-28 05:11:43.383947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.966 [2024-10-28 05:11:43.383974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.966 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.384137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.384164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.384344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.384370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.384504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.384530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.384682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.384722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.384843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.384872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.385049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.385077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 
00:35:52.967 [2024-10-28 05:11:43.385233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.385264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.385429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.385456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.385591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.385618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.385760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.385788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.385932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.385959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.386078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.386105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.386251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.386277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.386418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.386451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.386642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.386669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.386803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.386830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 
00:35:52.967 [2024-10-28 05:11:43.386969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.386995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.387140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.387166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.387308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.387335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.387476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.387503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.387648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.387675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.387814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.387841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.388019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.388048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.388231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.388257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.388372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.388399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.388565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.388592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 
00:35:52.967 [2024-10-28 05:11:43.388733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.388760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.388897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.388924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.389104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.389133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.389292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.389319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.389428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.389455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.389595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.389621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.389771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.389797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.389902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.389929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.390124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.967 [2024-10-28 05:11:43.390153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.967 qpair failed and we were unable to recover it. 00:35:52.967 [2024-10-28 05:11:43.390303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.390330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 
00:35:52.968 [2024-10-28 05:11:43.390498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.390525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.390649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.390677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.390814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.390840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.390999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.391043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.391204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.391235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.391420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.391447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.391620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.391652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.391816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.391851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.391988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.392026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.392192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.392234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 
00:35:52.968 [2024-10-28 05:11:43.392352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.392382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.392514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.392540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.392712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.392740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.392875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.392901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.393043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.393070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.393247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.393277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.393430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.393460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.393652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.393679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.393819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.393847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.394035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.394065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 
00:35:52.968 [2024-10-28 05:11:43.394227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.394253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.394365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.394392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.394578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.394608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.394757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.394784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.394937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.394964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.395102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.395129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.395300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.395326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.395461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.395488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.395626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.395666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.395772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.395799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 
00:35:52.968 [2024-10-28 05:11:43.395982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.396021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.396230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.396261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.396487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.396514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.396685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.396712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.396854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.396881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.397022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.397048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.397187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.397213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.397406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.397435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.397623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.397657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.968 qpair failed and we were unable to recover it. 00:35:52.968 [2024-10-28 05:11:43.397874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.968 [2024-10-28 05:11:43.397900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 
00:35:52.969 [2024-10-28 05:11:43.398067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.398097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.398264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.398290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.398453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.398499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.398653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.398703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.398820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.398849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.398994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.399020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.399156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.399183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.399297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.399323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.399478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.399525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.399675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.399720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 
00:35:52.969 [2024-10-28 05:11:43.399885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.399919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.400103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.400133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.400314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.400343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.400478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.400505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.400620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.400654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.400821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.400848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.400989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.401015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.401195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.401237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.401364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.401395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.401558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.401584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 
00:35:52.969 [2024-10-28 05:11:43.401707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.401736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.401875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.401902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.402078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.402104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.402284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.402325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.402500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.402529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.402664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.402700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.402818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.402845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.403007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.403034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.403173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.403200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.403365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.403392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 
00:35:52.969 [2024-10-28 05:11:43.403493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.403523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.403666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.403693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.403808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.403835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.403961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.404001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.404135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.404163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.404327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.404354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.404493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.404519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.404690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.404717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.404861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.404888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.405049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.405075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 
00:35:52.969 [2024-10-28 05:11:43.405241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.405267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.969 [2024-10-28 05:11:43.405429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.969 [2024-10-28 05:11:43.405458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.969 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.405646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.405676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.405808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.405836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.406007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.406034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.406149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.406175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.406340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.406367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.406549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.406578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.406739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.406766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.406932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.406958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 
00:35:52.970 [2024-10-28 05:11:43.407112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.407141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.407310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.407336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.407472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.407498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.407666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.407725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.407872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.407901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.408020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.408049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.408193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.408220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.408352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.408384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.408529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.408556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.408708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.408737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 
00:35:52.970 [2024-10-28 05:11:43.408873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.408899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.409089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.409116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.409255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.409282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.409416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.409442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.409556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.409582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.409726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.409754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.409894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.409945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.410131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.410157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.410301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.410329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.410462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.410489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 
00:35:52.970 [2024-10-28 05:11:43.410643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.410670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.410814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.410843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.410984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.411010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.411129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.411155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.411295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.411322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.411459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.411485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.411627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.411658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.411774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.411801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.411946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.970 [2024-10-28 05:11:43.411972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.970 qpair failed and we were unable to recover it. 00:35:52.970 [2024-10-28 05:11:43.412082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.412118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 
00:35:52.971 [2024-10-28 05:11:43.412249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.412293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.412471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.412499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.412670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.412697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.412817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.412843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.412991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.413023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.413167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.413193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.413332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.413358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.413528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.413557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.413696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.413723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.413864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.413890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 
00:35:52.971 [2024-10-28 05:11:43.414033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.414062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.414223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.414249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.414429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.414457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.414647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.414674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.414840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.414866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.415012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.415056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.415223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.415252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.415415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.415442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.415601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.415647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.415813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.415839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 
00:35:52.971 [2024-10-28 05:11:43.416013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.416038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.416193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.416223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.416368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.416394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.416503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.416529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.416679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.416720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.416866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.416894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.417107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.417134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.417276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.417302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.417444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.417470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.417615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.417653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 
00:35:52.971 [2024-10-28 05:11:43.417766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.417792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.417979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.418015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.418173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.418199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.418407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.971 [2024-10-28 05:11:43.418462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.971 qpair failed and we were unable to recover it. 00:35:52.971 [2024-10-28 05:11:43.418620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.418664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.418843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.418870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.419052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.419082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.419255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.419284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.419444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.419470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.419583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.419612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 
00:35:52.972 [2024-10-28 05:11:43.419767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.419794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.419935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.419961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.420094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.420120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.420285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.420330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.420489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.420515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.420659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.420687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.420852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.420879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.421056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.421082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.421191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.421217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.421382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.421408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 
00:35:52.972 [2024-10-28 05:11:43.421525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.421551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.421721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.421750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.421861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.421888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.422055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.422081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.422262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.422312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.422442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.422471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.422654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.422680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.422795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.422823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.422976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.423010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.423165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.423193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 
00:35:52.972 [2024-10-28 05:11:43.423336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.423379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.423555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.423584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.423763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.423789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.423929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.423955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.424096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.424138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.424329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.424355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.424515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.424547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.424716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.424744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.424883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.424910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.972 [2024-10-28 05:11:43.425025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.425052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 
00:35:52.972 [2024-10-28 05:11:43.425216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.972 [2024-10-28 05:11:43.425258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.972 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.425423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.425449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.425593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.425620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.425793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.425819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.425983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.426009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.426165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.426194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.426345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.426374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.426558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.426585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.426693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.426721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.426837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.426864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 
00:35:52.973 [2024-10-28 05:11:43.427008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.427035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.427178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.427221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.427377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.427405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.427566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.427592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.427740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.427768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.427952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.427986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.428150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.428177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.428310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.428337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.428472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.428512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.428646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.428674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 
00:35:52.973 [2024-10-28 05:11:43.428807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.428844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.429003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.429032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.429193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.429221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.429408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.429438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.429561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.429590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.429761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.429788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.429929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.429956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.430084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.430113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.430297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.430324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.430433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.430460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 
00:35:52.973 [2024-10-28 05:11:43.430595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.430622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.430797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.430824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.430973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.431016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.431165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.431194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.431335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.431362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.431530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.431556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.431739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.431766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.431910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.431937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.432103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.432133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.973 qpair failed and we were unable to recover it. 00:35:52.973 [2024-10-28 05:11:43.432278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.973 [2024-10-28 05:11:43.432321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 
00:35:52.974 [2024-10-28 05:11:43.432454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.432481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.432647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.432707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.432859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.432894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.433035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.433061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.433226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.433253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.433425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.433454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.433603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.433628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.433778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.433805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.433940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.433969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.434106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.434132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 
00:35:52.974 [2024-10-28 05:11:43.434280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.434309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.434483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.434510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.434646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.434673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.434837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.434864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.435028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.435073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.435256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.435283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.435433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.435460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.435601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.435628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.435774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.435801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.435964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.436028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 
00:35:52.974 [2024-10-28 05:11:43.436177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.436206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.436357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.436383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.436490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.436527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.436728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.436755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.436925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.436951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.437113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.437143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.437323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.437352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.437512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.437539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.437682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.437709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.437850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.437880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 
00:35:52.974 [2024-10-28 05:11:43.438030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.438057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.974 [2024-10-28 05:11:43.438278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.974 [2024-10-28 05:11:43.438329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.974 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.438497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.438523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.438665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.438692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.438864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.438891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.439073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.439099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.439235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.439262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.439397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.439423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.439563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.439590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.439760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.439801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 
00:35:52.975 [2024-10-28 05:11:43.439947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.439975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.440121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.440148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.440312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.440358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.440532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.440559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.440694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.440723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.440861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.440893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.441025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.441068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.441200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.441244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.441403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.441432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.441563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.441592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 
00:35:52.975 [2024-10-28 05:11:43.441770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.441797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.441932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.441981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.442142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.442185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.442337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.442382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.442489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.442518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.442704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.442749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.442882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.442933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.443100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.443127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.443346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.443417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 00:35:52.975 [2024-10-28 05:11:43.443557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.443585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.975 qpair failed and we were unable to recover it. 
00:35:52.975 [2024-10-28 05:11:43.443766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.975 [2024-10-28 05:11:43.443811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.443946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.443977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.444123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.444153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.444413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.444465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.444622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.444672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.444833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.444861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.445126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.445177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.445436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.445489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.445626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.445665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.445824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.445870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 
00:35:52.976 [2024-10-28 05:11:43.446034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.446064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.446268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.446312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.446454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.446481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.446621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.446654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.446784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.446830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.447021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.447066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.447224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.447277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.447438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.447465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.447579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.447607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.447745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.447775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 
00:35:52.976 [2024-10-28 05:11:43.447925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.447971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.448142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.448188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.448362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.448389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.448540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.448570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.448745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.448772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.976 qpair failed and we were unable to recover it. 00:35:52.976 [2024-10-28 05:11:43.448941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.976 [2024-10-28 05:11:43.448981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.449132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.449162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.449341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.449371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.449520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.449549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.449686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.449713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 
00:35:52.977 [2024-10-28 05:11:43.449852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.449879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.450072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.450101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.450270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.450297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.450486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.450526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.450677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.450707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.450866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.450893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.451054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.451084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.451210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.451240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.451413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.451442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.451592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.451619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 
00:35:52.977 [2024-10-28 05:11:43.451747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.451774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.451885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.451928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.452132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.452162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.452307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.452336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.452512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.452541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.452674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.452702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.452836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.452862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.452998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.453025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.453193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.453254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.453436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.453466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 
00:35:52.977 [2024-10-28 05:11:43.453649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.977 [2024-10-28 05:11:43.453694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.977 qpair failed and we were unable to recover it. 00:35:52.977 [2024-10-28 05:11:43.453834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.453861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.453983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.454010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.454149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.454192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.454380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.454409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.454565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.454594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.454720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.454747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.454887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.454914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.455051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.455080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.455233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.455262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 
00:35:52.978 [2024-10-28 05:11:43.455427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.455457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.455641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.455687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.455825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.455852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.455984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.456011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.456198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.456228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.456393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.456453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.456607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.456656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.456785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.456812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.456913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.456939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.457171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.457214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 
00:35:52.978 [2024-10-28 05:11:43.457392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.457421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.457561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.457590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.457787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.457814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.457977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.458020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.458143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.458173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.458351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.458381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.458532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.458561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.458738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.458770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.458941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.458984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.459121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.459165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 
00:35:52.978 [2024-10-28 05:11:43.459342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.978 [2024-10-28 05:11:43.459371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.978 qpair failed and we were unable to recover it. 00:35:52.978 [2024-10-28 05:11:43.459490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.459520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.459684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.459711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.459826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.459853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.459970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.459997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.460122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.460152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.460308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.460337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.460511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.460540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.460697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.460724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.460864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.460892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 
00:35:52.979 [2024-10-28 05:11:43.461035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.461062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.461239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.461268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.461400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.461429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.461588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.461618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.461784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.461811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.461948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.461974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.462162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.462192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.462305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.462334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.462462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.462491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.462689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.462717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 
00:35:52.979 [2024-10-28 05:11:43.462854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.462882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.463028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.463081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.463214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.463244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.463406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.463436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.463619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.463660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.463818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.463845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.464003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.464032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.464202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.464233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.464380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.464410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.464574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.464603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 
00:35:52.979 [2024-10-28 05:11:43.464772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.464800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.979 qpair failed and we were unable to recover it. 00:35:52.979 [2024-10-28 05:11:43.464912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.979 [2024-10-28 05:11:43.464944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.465081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.465132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.465419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.465479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.465647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.465692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.465826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.465852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.465977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.466025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.466191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.466225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.466381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.466410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.466565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.466592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 
00:35:52.980 [2024-10-28 05:11:43.466764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.466791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.466947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.466980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.467128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.467160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.467324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.467354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.467529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.467559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.467723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.467751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.467904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.467959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.468226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.468278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.468455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.468487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.468671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.468717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 
00:35:52.980 [2024-10-28 05:11:43.468871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.468898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.469082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.469114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.469272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.469330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.469509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.469539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.469703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.980 [2024-10-28 05:11:43.469731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.980 qpair failed and we were unable to recover it. 00:35:52.980 [2024-10-28 05:11:43.469872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.469898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.470055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.470098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.470253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.470283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.470447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.470479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.470663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.470690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 
00:35:52.981 [2024-10-28 05:11:43.470832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.470861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.470983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.471010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.471203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.471232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.471371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.471399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.471515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.471542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.471708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.471736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.471898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.471939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.472137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.472164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.472328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.472358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.472536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.472564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 
00:35:52.981 [2024-10-28 05:11:43.472720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.472748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.472959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.472985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.473098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.473143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.473327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.473359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.473510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.473543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.473735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.473763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.473915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.473945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.474070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.474100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.474214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.474252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.474423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.474452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 
00:35:52.981 [2024-10-28 05:11:43.474615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.474666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.474795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.474821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.474987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.475013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.475138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.475166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.475317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.475362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.475515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.475567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.475704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.475732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.475939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.475966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.476139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.476167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.476348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.476379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 
00:35:52.981 [2024-10-28 05:11:43.476509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.476538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.476700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.981 [2024-10-28 05:11:43.476728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.981 qpair failed and we were unable to recover it. 00:35:52.981 [2024-10-28 05:11:43.476847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.476891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.477066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.477092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.477261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.477305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.477463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.477491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.477598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.477627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.477814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.477858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.478014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.478044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.478206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.478233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 
00:35:52.982 [2024-10-28 05:11:43.478402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.478429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.478595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.478663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.478817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.478847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.479004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.479031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.479141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.479171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.479339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.479366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.479538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.479568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.479731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.479759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.479878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.479904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.480047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.480074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 
00:35:52.982 [2024-10-28 05:11:43.480233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.480263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.480429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.480456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.480605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.480632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.480790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.480817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.480971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.481000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.481129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.481156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.481341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.481386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.481529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.481559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.481725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.481756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.481945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.481976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 
00:35:52.982 [2024-10-28 05:11:43.482215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.482267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.482445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.482476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.482627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.482669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.482810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.482838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.483006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.483046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.483204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.483233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.483391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.483422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.483584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.483611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.483735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.483780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.982 [2024-10-28 05:11:43.483927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.483960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 
00:35:52.982 [2024-10-28 05:11:43.484129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.982 [2024-10-28 05:11:43.484159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.982 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.484293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.484320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.484431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.484458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.484645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.484674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.484839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.484871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.485041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.485069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.485239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.485265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.485405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.485432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.485574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.485603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.485775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.485803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 
00:35:52.983 [2024-10-28 05:11:43.485948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.485974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.486167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.486200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.486372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.486402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.486535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.486561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.486697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.486724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.486882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.486909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.487055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.487094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.487287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.487313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.487479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.487506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 00:35:52.983 [2024-10-28 05:11:43.487671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.983 [2024-10-28 05:11:43.487713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:52.983 qpair failed and we were unable to recover it. 
00:35:53.275 (the same pair of errors repeats for every subsequent connection attempt from 05:11:43.487834 through 05:11:43.525176: posix.c:1055:posix_sock_create reports connect() failed, errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it.")
00:35:53.275 [2024-10-28 05:11:43.525295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.525322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.525434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.525461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.525646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.525677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.525831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.525873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.526048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.526074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.526237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.526264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.526420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.526449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.526614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.526651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.526831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.526859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.527010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.527037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 
00:35:53.275 [2024-10-28 05:11:43.527202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.527246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.527370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.527401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.527567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.527597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.527774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.527806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.527965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.527995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.528176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.528206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.528384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.528423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.528586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.528613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.528819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.528849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.275 [2024-10-28 05:11:43.529045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.529072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 
00:35:53.275 [2024-10-28 05:11:43.529208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.275 [2024-10-28 05:11:43.529235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.275 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.529386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.529413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.529535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.529580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.529755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.529784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.529905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.529942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.530074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.530101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.530209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.530236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.530391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.530418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.530546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.530573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.530707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.530735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 
00:35:53.276 [2024-10-28 05:11:43.530876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.530903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.531078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.531104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.531241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.531268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.531412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.531439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.531578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.531606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.531754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.531782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.531940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.531970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.532108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.532134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.532272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.532299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.532474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.532504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 
00:35:53.276 [2024-10-28 05:11:43.532619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.532665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.532809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.532837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.532982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.533009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.533240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.533268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.533372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.533402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.533553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.533582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.533731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.533776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.533933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.533963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.534121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.534152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.534341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.534369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 
00:35:53.276 [2024-10-28 05:11:43.534526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.534556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.534692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.534722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.534879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.534909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.535068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.535095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.535215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.535259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.535404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.535430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.535539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.535566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.535683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.535711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.535855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.535899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.536077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.536106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 
00:35:53.276 [2024-10-28 05:11:43.536254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.536297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.536435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.536463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.536598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.536626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.536845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.536873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.276 qpair failed and we were unable to recover it. 00:35:53.276 [2024-10-28 05:11:43.537041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.276 [2024-10-28 05:11:43.537085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.537287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.537314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.537432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.537459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.537595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.537622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.537820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.537847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.537987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.538014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 
00:35:53.277 [2024-10-28 05:11:43.538185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.538215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.538403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.538430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.538572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.538599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.538816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.538844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.539108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.539162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.539327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.539358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.539515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.539545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.539686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.539713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.539829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.539856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.540030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.540060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 
00:35:53.277 [2024-10-28 05:11:43.540227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.540257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.540414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.540445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.540632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.540668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.540853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.540883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.541059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.541088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.541225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.541251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.541392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.541419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.541566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.541609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.541809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.541837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.541996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.542023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 
00:35:53.277 [2024-10-28 05:11:43.542134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.542160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.542333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.542375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.542528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.542558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.542703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.542731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.542852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.542878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.543021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.543049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.543230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.543256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.543421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.543447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.543583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.543610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.543794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.543823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 
00:35:53.277 [2024-10-28 05:11:43.544007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.544036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.544197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.544224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.544387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.544457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.277 [2024-10-28 05:11:43.544599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.277 [2024-10-28 05:11:43.544628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.277 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.544767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.544798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.544966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.545003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.545171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.545200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.545364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.545393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.545575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.545609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.545810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.545839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 
00:35:53.278 [2024-10-28 05:11:43.545944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.545998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.546153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.546183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.546332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.546361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.546521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.546548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.546692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.546720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.546876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.546905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.547046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.547076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.547233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.547260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.547402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.547429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.547605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.547639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 
00:35:53.278 [2024-10-28 05:11:43.547823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.547863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.548030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.548057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.548224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.548251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.548395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.548422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.548568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.548598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.548741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.548768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.548900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.548930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.549105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.549134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.549303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.549330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 00:35:53.278 [2024-10-28 05:11:43.549445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.278 [2024-10-28 05:11:43.549482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.278 qpair failed and we were unable to recover it. 
00:35:53.278 [2024-10-28 05:11:43.549621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.278 [2024-10-28 05:11:43.549653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.278 qpair failed and we were unable to recover it.
(The three-line pattern above repeats for each of the 210 connection attempts logged between 05:11:43.549621 and 05:11:43.589284; only the timestamps and the tqpair pointer change. In order: 143 attempts report tqpair=0x1ac3390, 38 report tqpair=0x7f9f08000b90, 10 report tqpair=0x7f9f10000b90, 16 more report tqpair=0x1ac3390, and the final 3 report tqpair=0x7f9f10000b90. The last attempt is shown below.)
00:35:53.283 [2024-10-28 05:11:43.589254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.283 [2024-10-28 05:11:43.589284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:35:53.283 qpair failed and we were unable to recover it.
00:35:53.283 [2024-10-28 05:11:43.589466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.589496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.589661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.589707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.589875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.589902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.590082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.590113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.590240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.590269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.590449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.590478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.590722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.590751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.590915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.590965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.591138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.591186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.591301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.591331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 
00:35:53.283 [2024-10-28 05:11:43.591465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.591493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.591632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.591683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.591819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.591846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.591992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.592023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.592165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.592193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.592334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.592362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.592525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.592553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.592706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.592733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.592880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.592908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.593059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.593086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 
00:35:53.283 [2024-10-28 05:11:43.593205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.593232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.593394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.593425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.593586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.593613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.593771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.593801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.283 qpair failed and we were unable to recover it. 00:35:53.283 [2024-10-28 05:11:43.593959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.283 [2024-10-28 05:11:43.593989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.594263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.594313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.594558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.594610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.594783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.594811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.594960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.594990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.595142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.595173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 
00:35:53.284 [2024-10-28 05:11:43.595296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.595326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.595475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.595505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.595696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.595723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.595827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.595854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.596001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.596046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.596209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.596237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.596385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.596414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.596600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.596626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.596776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.596807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.596952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.596995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 
00:35:53.284 [2024-10-28 05:11:43.597150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.597179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.597394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.597425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.597543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.597572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.597698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.597725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.597836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.597864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.598012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.598038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.598198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.598227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.598386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.598418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.598579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.598609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.598776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.598804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 
00:35:53.284 [2024-10-28 05:11:43.598958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.598989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.599174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.599201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.599373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.599407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.599630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.599692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.599814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.599841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.600025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.600056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.600242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.600272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.600427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.600457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.600645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.600699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.600840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.600868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 
00:35:53.284 [2024-10-28 05:11:43.601060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.601090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.601428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.601479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.601702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.601730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.601839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.601866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.602004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.602047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.284 [2024-10-28 05:11:43.602296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.284 [2024-10-28 05:11:43.602330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.284 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.602510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.602541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.602738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.602765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.602959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.602989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.603139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.603179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 
00:35:53.285 [2024-10-28 05:11:43.603414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.603479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.603618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.603656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.603842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.603868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.604120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.604150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.604281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.604312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.604462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.604492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.604651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.604696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.604836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.604863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.604981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.605026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.605272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.605303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 
00:35:53.285 [2024-10-28 05:11:43.605458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.605489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.605680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.605708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.605820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.605847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.605975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.606002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.606146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.606176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.606315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.606348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.606526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.606556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.606687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.606714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.606877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.606926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.607081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.607110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 
00:35:53.285 [2024-10-28 05:11:43.607262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.607292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.607423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.607457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.607646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.607697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.607814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.607841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.607979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.608023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.608179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.608245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.608411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.608458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.608644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.608672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.608811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.608838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.609023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.609052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 
00:35:53.285 [2024-10-28 05:11:43.609231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.609262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.609415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.609446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.609600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.609629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.609802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.609829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.609998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.610025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.610263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.610317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.610505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.610536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.285 qpair failed and we were unable to recover it. 00:35:53.285 [2024-10-28 05:11:43.610685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.285 [2024-10-28 05:11:43.610739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.610884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.610911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.611085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.611113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 
00:35:53.286 [2024-10-28 05:11:43.611274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.611301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.611420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.611447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.611623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.611676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.611858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.611886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.612036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.612068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.612260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.612287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.612435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.612462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.612638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.612665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.612813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.612839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.613007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.613033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 
00:35:53.286 [2024-10-28 05:11:43.613246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.613276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.613433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.613463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.613613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.613659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.613824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.613850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.613965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.613991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.614103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.614129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.614238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.614265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.614383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.614411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.614551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.614577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.614771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.614798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 
00:35:53.286 [2024-10-28 05:11:43.614947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.614990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.615177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.615202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.615370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.615401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.615584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.615612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.615765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.615791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.615935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.615961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.616083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.616109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.616253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.616279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.616439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.616469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.616625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.616657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 
00:35:53.286 [2024-10-28 05:11:43.616772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.616798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.616981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.617010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.617165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.617193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.617333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.617359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.617494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.617520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.617679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.617706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.617873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.617899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.618065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.618092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.618257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.618283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.286 [2024-10-28 05:11:43.618422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.618452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 
00:35:53.286 [2024-10-28 05:11:43.618594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.286 [2024-10-28 05:11:43.618623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.286 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.618773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.618803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.618951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.618977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.619180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.619209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.619354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.619382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.619569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.619595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.619748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.619774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.619914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.619940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.620101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.620128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.620229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.620255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 
00:35:53.287 [2024-10-28 05:11:43.620417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.620447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.620592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.620619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.620745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.620772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.620886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.620915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.621058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.621084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.621253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.621283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.621435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.621464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.621646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.621682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.621847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.621874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.622020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.622052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 
00:35:53.287 [2024-10-28 05:11:43.622203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.622232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.622389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.622415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.622524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.622550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.622757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.622783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.622981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.623010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.623161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.623188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.623329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.623372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.623489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.623518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.623697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.623726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.623852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.623877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 
00:35:53.287 [2024-10-28 05:11:43.624069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.624100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.624249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.624279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.624431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.624461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.624620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.624653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.624804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.624849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.624980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.625005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.625138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.625164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.625337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.625368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.625511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.625537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.625679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.625706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 
00:35:53.287 [2024-10-28 05:11:43.625878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.625915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.626030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.626056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.626187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.626213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.626329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.626355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.287 [2024-10-28 05:11:43.626498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.287 [2024-10-28 05:11:43.626524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.287 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.626685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.626712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.626823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.626851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.626991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.627017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.627156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.627182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.627327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.627355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 
00:35:53.288 [2024-10-28 05:11:43.627502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.627529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.627653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.627690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.627806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.627832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.627982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.628009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.628116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.628143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.628281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.628308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.628447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.628473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.628613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.628645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.628788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.628815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.628953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.628980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 
00:35:53.288 [2024-10-28 05:11:43.629125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.629153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.629298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.629325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.629491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.629516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.629653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.629687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.629829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.629860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.630031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.630057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.630193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.630219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.630354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.630382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.630551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.630577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.630720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.630747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 
00:35:53.288 [2024-10-28 05:11:43.630884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.630910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.631085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.631111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.631243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.631269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.631416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.631443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.631580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.631607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.631756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.631782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.631924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.631950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.632068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.632095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.632214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.632241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.632405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.632434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 
00:35:53.288 [2024-10-28 05:11:43.632601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.632627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.632779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.632806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.632944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.632970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.633134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.633161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.633300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.633326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.633466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.633493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.288 qpair failed and we were unable to recover it. 00:35:53.288 [2024-10-28 05:11:43.633608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.288 [2024-10-28 05:11:43.633643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.633800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.633827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.633973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.633999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.634166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.634192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 
00:35:53.289 [2024-10-28 05:11:43.634350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.634376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.634542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.634568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.634722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.634749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.634864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.634892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.635059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.635085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.635229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.635256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.635402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.635428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.635566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.635592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.635838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.635866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.636037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.636064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 
00:35:53.289 [2024-10-28 05:11:43.636227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.636253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.636394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.636421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.636564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.636591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.636773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.636800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.636968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.636994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.637138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.637164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.637298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.637325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.637467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.637493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.637601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.637630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.637754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.637781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 
00:35:53.289 [2024-10-28 05:11:43.637923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.637950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.638083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.638110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.638232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.638258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.638421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.638447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.638586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.638612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.638759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.638788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.638937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.638963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.639103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.639129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.639241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.639268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.639442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.639468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 
00:35:53.289 [2024-10-28 05:11:43.639608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.639642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.639750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.639776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.639912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.639937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.640086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.640113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.289 [2024-10-28 05:11:43.640227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.289 [2024-10-28 05:11:43.640253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.289 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.640371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.640410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.640571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.640601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.640734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.640761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.640938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.640964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.641104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.641131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 
00:35:53.290 [2024-10-28 05:11:43.641269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.641296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.641460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.641488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.641629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.641666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.641786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.641813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.641980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.642007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.642158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.642185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.642359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.642386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.642528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.642555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.642669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.642697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.642842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.642868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 
00:35:53.290 [2024-10-28 05:11:43.643035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.643061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.643204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.643230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.643366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.643395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.643553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.643586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.643750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.643777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.643914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.643942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.644083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.644111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.644278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.644305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.644465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.644494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.644657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.644685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 
00:35:53.290 [2024-10-28 05:11:43.644797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.644824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.644962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.644989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.645151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.645177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.646154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.646188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.646369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.646396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.646541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.646568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.646714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.646742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.646874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.646901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.647045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.647078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 00:35:53.290 [2024-10-28 05:11:43.647227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.290 [2024-10-28 05:11:43.647258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.290 qpair failed and we were unable to recover it. 
00:35:53.293 [2024-10-28 05:11:43.665547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.293 [2024-10-28 05:11:43.665577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.293 qpair failed and we were unable to recover it.
00:35:53.293 [2024-10-28 05:11:43.665767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.293 [2024-10-28 05:11:43.665805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:35:53.293 qpair failed and we were unable to recover it.
00:35:53.293 [2024-10-28 05:11:43.669539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.293 [2024-10-28 05:11:43.669565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:35:53.293 qpair failed and we were unable to recover it.
00:35:53.293 [2024-10-28 05:11:43.669737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.293 [2024-10-28 05:11:43.669767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.293 qpair failed and we were unable to recover it.
00:35:53.295 [2024-10-28 05:11:43.680298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.680355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.680540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.680570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.680745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.680772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.680933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.680962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.681137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.681185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.681348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.681378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.681541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.681571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.681713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.681741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.681854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.681880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.682041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.682084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 
00:35:53.295 [2024-10-28 05:11:43.682367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.682420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.682585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.682611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.682746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.682776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.682899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.682926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.683030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.683073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.683229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.683258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.683419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.683447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.683613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.683647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.683764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.683792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.683947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.683977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 
00:35:53.295 [2024-10-28 05:11:43.684109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.684138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.684301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.684331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.684458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.684489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.684648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.684675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.684821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.684847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.684984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.685013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.685141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.295 [2024-10-28 05:11:43.685171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.295 qpair failed and we were unable to recover it. 00:35:53.295 [2024-10-28 05:11:43.685351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.685381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.685510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.685539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.685737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.685765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 
00:35:53.296 [2024-10-28 05:11:43.685903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.685946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.686141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.686167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.686327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.686356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.686494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.686521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.686663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.686690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.686801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.686827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.686971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.686997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.687163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.687192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.687369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.687398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.687568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.687598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 
00:35:53.296 [2024-10-28 05:11:43.687776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.687803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.687921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.687947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.688111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.688137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.688371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.688401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.688553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.688582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.688757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.688784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.688924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.688950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.689134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.689161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.689325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.689356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 00:35:53.296 [2024-10-28 05:11:43.689514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.296 [2024-10-28 05:11:43.689546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.296 qpair failed and we were unable to recover it. 
00:35:53.296 [2024-10-28 05:11:43.690609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.296 [2024-10-28 05:11:43.690656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:53.296 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 repeats through 2024-10-28 05:11:43.713 in this excerpt ...]
00:35:53.299 [2024-10-28 05:11:43.713715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.713742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.713883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.713910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.714083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.714110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.714248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.714275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.714393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.714421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.714590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.714616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.714786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.714836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.715016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.715043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.715159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.715186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.715354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.715380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 
00:35:53.299 [2024-10-28 05:11:43.715549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.715576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.715762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.715808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.715968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.716011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.716256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.716319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.716457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.716484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.299 qpair failed and we were unable to recover it. 00:35:53.299 [2024-10-28 05:11:43.716663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.299 [2024-10-28 05:11:43.716707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.716835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.716879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.717042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.717086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.717257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.717301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.717419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.717446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 
00:35:53.300 [2024-10-28 05:11:43.717615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.717649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.717760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.717787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.717931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.717976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.718115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.718159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.718324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.718369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.718506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.718533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.718676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.718703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.718853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.718897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.719039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.719082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.719225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.719251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 
00:35:53.300 [2024-10-28 05:11:43.719425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.719452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.719593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.719620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.719784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.719829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.720027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.720075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.720200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.720245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.720359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.720386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.720529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.720555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.720715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.720759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.720873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.720899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.721014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.721041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 
00:35:53.300 [2024-10-28 05:11:43.721186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.721212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.721347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.721374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.721545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.721572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.721728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.721755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.721894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.721920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.722077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.722121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.722292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.722318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.722457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.722484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.722597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.722625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.722808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.722853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 
00:35:53.300 [2024-10-28 05:11:43.723015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.723058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.723223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.723251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.723385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.723413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.723553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.723579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.723717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.723762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.723927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.723971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.724104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.724148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.724286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.724313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.300 qpair failed and we were unable to recover it. 00:35:53.300 [2024-10-28 05:11:43.724486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.300 [2024-10-28 05:11:43.724513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.724628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.724660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 
00:35:53.301 [2024-10-28 05:11:43.724807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.724852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.725014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.725040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.725159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.725187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.725332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.725358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.725526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.725553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.725716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.725761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.725926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.725970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.726126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.726170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.726315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.726341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.726476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.726502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 
00:35:53.301 [2024-10-28 05:11:43.726648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.726676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.726798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.726827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.727002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.727047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.727185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.727218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.727343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.727381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.727560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.727587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.727744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.727792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.727927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.727971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.728169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.728212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.728338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.728365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 
00:35:53.301 [2024-10-28 05:11:43.728500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.728526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.728716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.728761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.728906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.728933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.729052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.729078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.729222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.729249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.729419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.729445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.729581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.729607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.729787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.729814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.729935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.729980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.730138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.730182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 
00:35:53.301 [2024-10-28 05:11:43.730299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.730326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.730467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.730494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.730630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.730665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.730827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.730872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.731034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.301 [2024-10-28 05:11:43.731077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.301 qpair failed and we were unable to recover it. 00:35:53.301 [2024-10-28 05:11:43.731246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.731273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.731388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.731415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.731548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.731574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.731740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.731786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.731923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.731968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 
00:35:53.302 [2024-10-28 05:11:43.732143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.732191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.732320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.732347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.732511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.732544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.732707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.732737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.732880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.732929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.733047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.733074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.733220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.733247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.733418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.733446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.733588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.733614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.733790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.733835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 
00:35:53.302 [2024-10-28 05:11:43.734045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.734089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.734271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.734298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.734444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.734471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.734607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.734651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.734813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.734860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.735006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.735033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.735158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.735184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.735300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.735329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.735472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.735500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.735614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.735646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 
00:35:53.302 [2024-10-28 05:11:43.735766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.735792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.735931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.735981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.736148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.736174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.736350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.736379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.736499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.736526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.736642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.736677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.736798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.736825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.736953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.736980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.737097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.737124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 00:35:53.302 [2024-10-28 05:11:43.737263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-10-28 05:11:43.737291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.302 qpair failed and we were unable to recover it. 
00:35:53.302 [2024-10-28 05:11:43.737443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.302 [2024-10-28 05:11:43.737471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:53.302 qpair failed and we were unable to recover it.
00:35:53.305 [... the same three-line error block (connect() failed, errno = 111; sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats with successive timestamps through 05:11:43.758540 ...]
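For reference, errno = 111 on Linux is ECONNREFUSED: the TCP connect() attempt is being actively refused, typically because nothing is listening on 10.0.0.2:4420 (the NVMe/TCP target is down or not yet serving). The minimal standalone C sketch below is not part of SPDK or this test; it only reproduces the same failure mode, and the address and port are copied from the log purely for illustration.

    /* Sketch: connect() to a TCP endpoint with no listener fails with errno 111 (ECONNREFUSED). */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                  /* NVMe/TCP port seen in the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener on the target this prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }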
00:35:53.305 [2024-10-28 05:11:43.758704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.305 [2024-10-28 05:11:43.758752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.305 qpair failed and we were unable to recover it.
00:35:53.307 [... the same three-line error block repeats with successive timestamps through 05:11:43.774507, now for tqpair=0x1ac3390 with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:35:53.307 [2024-10-28 05:11:43.774644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.774687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.774839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.774869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.775037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.775064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.775208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.775235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.775407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.775433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.775559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.775586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.775768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.775798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.775965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.775993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.776155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.776184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.776331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.776359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 
00:35:53.307 [2024-10-28 05:11:43.776528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.776555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.776684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.776729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.776884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.776920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.777118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.777188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.777391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.777417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.777563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.777594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.777763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.777793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.777939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.777967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.778207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.778257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.778447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.778473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 
00:35:53.307 [2024-10-28 05:11:43.778643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.778686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.778847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.778876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.779072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.779107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.779279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.779305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.779440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.779469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.779586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.779613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.779791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.779821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.779954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.779981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.780135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.780164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.780327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.780355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 
00:35:53.307 [2024-10-28 05:11:43.780504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.780532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.780676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.780702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.780865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.780891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.781013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.307 [2024-10-28 05:11:43.781040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.307 qpair failed and we were unable to recover it. 00:35:53.307 [2024-10-28 05:11:43.781146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.781172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.781290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.781316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.781438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.781465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.781606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.781632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.781780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.781807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.781924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.781950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 
00:35:53.308 [2024-10-28 05:11:43.782091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.782117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.782259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.782285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.782461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.782491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.782609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.782642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.782784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.782811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.782937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.782963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.783131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.783157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.783299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.783325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.783468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.783494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.783647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.783674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 
00:35:53.308 [2024-10-28 05:11:43.783822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.783848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.783992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.784018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.784156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.784182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.784342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.784368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.784536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.784562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.784689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.784716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.784861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.784887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.785037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.785064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.785228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.785254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.785394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.785419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 
00:35:53.308 [2024-10-28 05:11:43.785590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.785616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.785847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.785875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.786030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.786058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.786201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.786226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.786364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.786391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.786533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.786559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.786703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.786729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.786841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.786867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.786989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.787015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.787191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.787224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 
00:35:53.308 [2024-10-28 05:11:43.787347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.787374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.787474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.787500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.787644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.787671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.787791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.787817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.787958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.787984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.788090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.788117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.788262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.788288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.788416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.788444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.788582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.788609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.788758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.788785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 
00:35:53.308 [2024-10-28 05:11:43.788900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.788927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.789064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.789090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.308 [2024-10-28 05:11:43.789202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.308 [2024-10-28 05:11:43.789229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.308 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.789373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.789401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.789543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.789568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.789728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.789755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.789865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.789891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.790056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.790097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.790274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.790315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.790481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.790511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 
00:35:53.309 [2024-10-28 05:11:43.790693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.790720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.790859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.790885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.791065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.791092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.791211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.791238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.791352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.791381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.791517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.791543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.791658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.791685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.791811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.791837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.792013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.792039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.792176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.792202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 
00:35:53.309 [2024-10-28 05:11:43.792311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.792337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.792478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.792504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.792649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.792675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.792793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.792819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.792958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.792985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.793131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.793163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.793302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.793331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.793448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.793474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.793621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.793655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.793773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.793800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 
00:35:53.309 [2024-10-28 05:11:43.793959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.793985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.794123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.794149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.794250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.794275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.794403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.794431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.794601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.794627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.794771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.794798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.794918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.794945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.795103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.795130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.795301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.795336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.795472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.795501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 
00:35:53.309 [2024-10-28 05:11:43.795676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.795704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.795813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.795841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.795984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.796010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.796176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.796203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.796322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.796348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.796468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.796494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.796673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.796701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.796816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.796843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.796957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.796984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 00:35:53.309 [2024-10-28 05:11:43.797140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.309 [2024-10-28 05:11:43.797166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.309 qpair failed and we were unable to recover it. 
00:35:53.309 [2024-10-28 05:11:43.797349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.309 [2024-10-28 05:11:43.797375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.309 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry in this window ...]
00:35:53.314 [2024-10-28 05:11:43.832751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.314 [2024-10-28 05:11:43.832778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.314 qpair failed and we were unable to recover it.
00:35:53.314 [2024-10-28 05:11:43.832888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.832913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.833100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.833126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.833293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.833319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.833430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.833457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.833649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.833679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.833841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.833867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.834010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.834036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.834177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.834203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.834344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.834370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.834517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.834543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 
00:35:53.314 [2024-10-28 05:11:43.834694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.834720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.834892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.834936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.835131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.835157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.835266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.835292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.835404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.835434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.835576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.835603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.835741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.835771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.835912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.835939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.836130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.836159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.836330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.836356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 
00:35:53.314 [2024-10-28 05:11:43.836489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.836515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.836680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.836708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.836825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.836853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.837056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.837083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.837220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.837248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.837419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.837445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.837618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.837654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.837789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.837818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.837974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.838003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.838156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.838183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 
00:35:53.314 [2024-10-28 05:11:43.838301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.838329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.838443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.838471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.838609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.838640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.838763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.838788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.838924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.838950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.839118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.839145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.839280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.839312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.839485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.839511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.839662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.839690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.839801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.839828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 
00:35:53.314 [2024-10-28 05:11:43.839967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.839993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.840107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.840137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.840254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.840280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.840421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.840448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.840578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.840621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.840797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.840824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.840980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.841010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.841194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.314 [2024-10-28 05:11:43.841223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.314 qpair failed and we were unable to recover it. 00:35:53.314 [2024-10-28 05:11:43.841377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.315 [2024-10-28 05:11:43.841407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.315 qpair failed and we were unable to recover it. 00:35:53.315 [2024-10-28 05:11:43.841563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.315 [2024-10-28 05:11:43.841589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.315 qpair failed and we were unable to recover it. 
00:35:53.315 [2024-10-28 05:11:43.841731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.315 [2024-10-28 05:11:43.841758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.315 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.841900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.841926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.842053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.842080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.842218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.842245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.842354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.842380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.842496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.842523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.842625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.842658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.842798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.842825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.842936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.842963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.843108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.843134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 
00:35:53.598 [2024-10-28 05:11:43.843275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.843302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.843439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.843466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.843571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.843597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.843722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.843749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.598 [2024-10-28 05:11:43.843860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.598 [2024-10-28 05:11:43.843886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.598 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.844032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.844058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.844197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.844223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.844371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.844398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.844500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.844526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.844663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.844702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 
00:35:53.599 [2024-10-28 05:11:43.844859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.844918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.845120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.845168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.845282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.845310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.845448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.845475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.845596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.845624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.845762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.845808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.845975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.846019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.846148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.846197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.846337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.846364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.846473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.846499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 
00:35:53.599 [2024-10-28 05:11:43.846647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.846675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.846815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.846841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.847015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.847042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.847185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.847213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.847359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.847386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.847499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.847526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.847671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.847699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.847819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.847845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.847970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.847996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.848118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.848145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 
00:35:53.599 [2024-10-28 05:11:43.848254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.848280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.848450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.848477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.848599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.848626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.848822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.848863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.849012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.849039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.849213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.849249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.849395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.849423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.849559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.849585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.849693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.849719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.849837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.849864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 
00:35:53.599 [2024-10-28 05:11:43.849996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.850025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.850232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.850261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.850414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.850446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.599 [2024-10-28 05:11:43.850569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.599 [2024-10-28 05:11:43.850597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.599 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.850781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.850809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.850946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.850974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.851129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.851158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.851316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.851345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.851501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.851530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.851679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.851707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 
00:35:53.600 [2024-10-28 05:11:43.851846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.851873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.852066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.852095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.852272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.852301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.852456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.852484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.852663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.852705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.852873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.852899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.853074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.853104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.853255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.853284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.853438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.853467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.853619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.853655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 
00:35:53.600 [2024-10-28 05:11:43.853789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.853816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.853982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.854009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.854168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.854202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.854337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.854368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.854519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.854550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.854712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.854739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.854877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.854903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.855024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.855051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.855210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.855239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.855397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.855427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 
00:35:53.600 [2024-10-28 05:11:43.855592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.855618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.855766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.855792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.855956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.855983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.856144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.856174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.856299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.856329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.856494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.856525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.856669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.856695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.856834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.856861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.857018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.857047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.857197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.857226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 
00:35:53.600 [2024-10-28 05:11:43.857380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.857409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.857538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.857564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.857708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.857735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.857880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.600 [2024-10-28 05:11:43.857906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.600 qpair failed and we were unable to recover it. 00:35:53.600 [2024-10-28 05:11:43.858031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.858060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.858224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.858249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.858393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.858418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.858585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.858628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.858744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.858771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.858909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.858936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 
00:35:53.601 [2024-10-28 05:11:43.859081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.859107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.859278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.859305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.859448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.859474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.859641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.859667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.859819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.859849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.860034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.860064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.860255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.860282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.860428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.860454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.860559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.860585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.860786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.860814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 
00:35:53.601 [2024-10-28 05:11:43.860971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.861000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.861158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.861185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.861350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.861377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.861579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.861608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.861779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.861807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.861939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.861965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.862103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.862147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.862271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.862300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.862477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.862507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.862671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.862699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 
00:35:53.601 [2024-10-28 05:11:43.862809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.862836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.862994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.863036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.863170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.863196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.863358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.863385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.863493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.863538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.863672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.863703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.863862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.863891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.864061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.864088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.864274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.864303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.864473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.864500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 
00:35:53.601 [2024-10-28 05:11:43.864668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.864713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.864876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.864904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.865023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.865068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.601 qpair failed and we were unable to recover it. 00:35:53.601 [2024-10-28 05:11:43.865226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.601 [2024-10-28 05:11:43.865267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.865407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.865433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.865569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.865596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.865737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.865763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.865907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.865951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.866093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.866122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.866305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.866331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 
00:35:53.602 [2024-10-28 05:11:43.866445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.866477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.866590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.866617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.866735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.866761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.866875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.866900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.867040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.867083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.867248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.867277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.867407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.867436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.867598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.867623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.867771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.867798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.867950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.867979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 
00:35:53.602 [2024-10-28 05:11:43.868170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.868196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.868361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.868387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.868517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.868546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.868738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.868765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.868940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.868966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.869104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.869130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.869272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.869297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.869475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.869501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.869645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.869672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.869832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.869857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 
00:35:53.602 [2024-10-28 05:11:43.869966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.869993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.870133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.870159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.870319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.870347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.870487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.870513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.870676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.870702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.870899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.870928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.871075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.871104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.871253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.871283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.871402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.871429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.871594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.871620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 
00:35:53.602 [2024-10-28 05:11:43.871771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.871796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-10-28 05:11:43.871935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.602 [2024-10-28 05:11:43.871960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.872070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.872096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.872215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.872240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.872421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.872450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.872647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.872674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.872833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.872862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.873056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.873082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.873218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.873243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.873381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.873408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 
00:35:53.603 [2024-10-28 05:11:43.873519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.873546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.873734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.873761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.873910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.873935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.874072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.874098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.874206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.874232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.874413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.874455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.874613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.874648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.874786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.874812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.874927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.874953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.875157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.875184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 
00:35:53.603 [2024-10-28 05:11:43.875368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.875396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.875555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.875581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.875721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.875747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.875882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.875908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.876042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.876076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.876236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.876261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.876407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.876433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.876602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.876663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.876818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.876848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.877001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.877027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 
00:35:53.603 [2024-10-28 05:11:43.877149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.877175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.877292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.877318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.877457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.877484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.877646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.877673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.877841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.877870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.878024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.878053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.878218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.603 [2024-10-28 05:11:43.878244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-10-28 05:11:43.878382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.878408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.878550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.878575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.878711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.878737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 
00:35:53.604 [2024-10-28 05:11:43.878875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.878901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.879103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.879128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.879278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.879319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.879447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.879488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.879631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.879661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.879797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.879823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.879939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.879981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.880136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.880164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.880344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.880373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.880528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.880555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 
00:35:53.604 [2024-10-28 05:11:43.880711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.880740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.880922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.880951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.881147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.881173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.881306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.881331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.881442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.881467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.881607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.881642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.881828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.881854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.882020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.882046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.882152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.882197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.882373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.882400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 
00:35:53.604 [2024-10-28 05:11:43.882532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.882558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.882700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.882726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.882896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.882922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.883084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.883113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.883268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.883297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.883456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.883482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.883602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.883629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.883809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.883835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.884021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.884050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.884172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.884198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 
00:35:53.604 [2024-10-28 05:11:43.884361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.884386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.884522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.884550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.884712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.884741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.884905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.884931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.885074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.885099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.885277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.885307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.885425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.885451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-10-28 05:11:43.885558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.604 [2024-10-28 05:11:43.885584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.605 [2024-10-28 05:11:43.885725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.605 [2024-10-28 05:11:43.885752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.605 qpair failed and we were unable to recover it. 00:35:53.605 [2024-10-28 05:11:43.885899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.605 [2024-10-28 05:11:43.885926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.605 qpair failed and we were unable to recover it. 
00:35:53.605 [2024-10-28 05:11:43.886151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.605 [2024-10-28 05:11:43.886179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.605 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats, with only the timestamps advancing, for all intermediate entries in this span ...]
00:35:53.610 [2024-10-28 05:11:43.923619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.610 [2024-10-28 05:11:43.923650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.610 qpair failed and we were unable to recover it.
00:35:53.610 [2024-10-28 05:11:43.923791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.923816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.924020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.924046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.924188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.924214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.924415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.924441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.924576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.924602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.924808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.924834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.924994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.925023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.925184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.925210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.925372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.925397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.925534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.925562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 
00:35:53.610 [2024-10-28 05:11:43.925725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.925751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.925891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.925917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.926035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.926065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.926208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.926234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.926372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.926398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.926536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.926561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.926717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.926747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.926890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.926919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.927059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.927085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.927200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.927230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 
00:35:53.610 [2024-10-28 05:11:43.927347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.927374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.927483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.927509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.927647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.927674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.610 [2024-10-28 05:11:43.927817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.610 [2024-10-28 05:11:43.927843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.610 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.927952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.927978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.928117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.928160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.928299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.928325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.928441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.928468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.928584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.928610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.928737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.928763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 
00:35:53.611 [2024-10-28 05:11:43.928950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.928979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.929146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.929173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.929331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.929359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.929546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.929575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.929730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.929757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.929871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.929896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.930037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.930080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.930235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.930263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.930440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.930468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.930603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.930629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 
00:35:53.611 [2024-10-28 05:11:43.930800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.930826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.930986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.931015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.931170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.931198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.931356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.931382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.931526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.931567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.931735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.931762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.931902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.931932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.932130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.932156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.932273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.932298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.932435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.932461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 
00:35:53.611 [2024-10-28 05:11:43.932601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.932629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.932779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.932805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.932941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.932966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.933149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.933175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.933317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.933358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.933543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.933568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.933724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.933753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.933904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.611 [2024-10-28 05:11:43.933933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.611 qpair failed and we were unable to recover it. 00:35:53.611 [2024-10-28 05:11:43.934083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.934112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.934270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.934295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 
00:35:53.612 [2024-10-28 05:11:43.934465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.934491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.934688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.934717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.934874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.934903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.935065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.935091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.935272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.935301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.935451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.935480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.935629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.935663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.935842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.935868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.935989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.936032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.936211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.936241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 
00:35:53.612 [2024-10-28 05:11:43.936353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.936382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.936531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.936557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.936740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.936770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.936927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.936956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.937114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.937144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.937330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.937356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.937505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.937533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.937713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.937742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.937891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.937920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.938078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.938104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 
00:35:53.612 [2024-10-28 05:11:43.938213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.938239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.938378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.938404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.938555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.938584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.938750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.938776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.938887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.938912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.939105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.939134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.939283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.939312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.939480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.939506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.939685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.939715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.939867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.939896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 
00:35:53.612 [2024-10-28 05:11:43.940046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.940074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.940235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.940262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.940404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.940430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.940613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.940647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.940768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.940796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.940926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.940951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.941094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.941119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.941257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.612 [2024-10-28 05:11:43.941283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.612 qpair failed and we were unable to recover it. 00:35:53.612 [2024-10-28 05:11:43.941445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.941474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.941641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.941668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 
00:35:53.613 [2024-10-28 05:11:43.941823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.941852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.942042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.942071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.942192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.942221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.942382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.942408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.942591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.942619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.942789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.942815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.942946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.942972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.943100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.943126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.943258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.943283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.943420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.943449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 
00:35:53.613 [2024-10-28 05:11:43.943595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.943623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.943788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.943813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.943969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.943998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.944150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.944178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.944360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.944393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.944526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.944554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.944723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.944768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.944930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.944955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.945074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.945100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.945264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.945290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 
00:35:53.613 [2024-10-28 05:11:43.945398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.945441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.945625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.945660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.945809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.945838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.945965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.945991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.946157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.946183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.946302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.946328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.946489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.946518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.946649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.946675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.946819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.946845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.947011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.947040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 
00:35:53.613 [2024-10-28 05:11:43.947183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.947224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.947388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.947413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.947578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.947621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.947766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.947792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.947955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.947999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.948158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.948183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.948365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.948394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.948525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.948553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.613 qpair failed and we were unable to recover it. 00:35:53.613 [2024-10-28 05:11:43.948732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.613 [2024-10-28 05:11:43.948762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.948949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.948975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 
00:35:53.614 [2024-10-28 05:11:43.949135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.949163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.949324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.949354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.949489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.949515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.949690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.949716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.949879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.949904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.950079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.950105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.950234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.950259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.950398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.950423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.950592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.950620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.950769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.950795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 
00:35:53.614 [2024-10-28 05:11:43.950930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.950973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.951133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.951159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.951335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.951363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.951515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.951543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.951690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.951720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.951908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.951933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.952091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.952120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.952294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.952320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.952455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.952480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.952592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.952617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 
00:35:53.614 [2024-10-28 05:11:43.952766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.952791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.952902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.952951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.953106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.953135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.953265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.953292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.953458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.953502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.953654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.953683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.953836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.953866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.953987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.954013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.954153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.954183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.954325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.954354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 
00:35:53.614 [2024-10-28 05:11:43.954529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.954558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.954720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.954746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.954879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.954905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.955108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.955137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.955284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.955313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.955499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.955525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.955622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.955672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.955863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.955889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.956053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.614 [2024-10-28 05:11:43.956079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.614 qpair failed and we were unable to recover it. 00:35:53.614 [2024-10-28 05:11:43.956215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.956241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 
00:35:53.615 [2024-10-28 05:11:43.956355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.956396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.956531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.956561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.956685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.956714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.956879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.956904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.957011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.957036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.957208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.957250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.957389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.957415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.957529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.957556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.957665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.957692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.957894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.957923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 
00:35:53.615 [2024-10-28 05:11:43.958078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.958107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.958263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.958289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.958467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.958495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.958615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.958650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.958825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.958854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.959041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.959067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.959217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.959243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.959425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.959454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.959631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.959666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.959796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.959823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 
00:35:53.615 [2024-10-28 05:11:43.959965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.959991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.960100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.960126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.960288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.960317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.960440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.960466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.960608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.960650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.960763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.960788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.960943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.960971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.961128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.961154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.961262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.961288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.961436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.961462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 
00:35:53.615 [2024-10-28 05:11:43.961601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.961630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.961762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.961788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.961918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.961944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.962150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.962179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.962290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.962318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.962502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.962528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.962730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.962757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.962901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.962927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.963092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.615 [2024-10-28 05:11:43.963122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.615 qpair failed and we were unable to recover it. 00:35:53.615 [2024-10-28 05:11:43.963309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.963335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 
00:35:53.616 [2024-10-28 05:11:43.963463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.963505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.963688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.963718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.963864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.963893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.964057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.964084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.964271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.964301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.964431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.964460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.964614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.964649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.964804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.964830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.964973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.964998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.965115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.965141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 
00:35:53.616 [2024-10-28 05:11:43.965284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.965311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.965487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.965513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.965682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.965713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.965908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.965934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.966038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.966064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.966228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.966254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.966424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.966459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.966656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.966685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.966800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.966826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.966966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.966993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 
00:35:53.616 [2024-10-28 05:11:43.967137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.967181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.967341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.967370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.967547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.967576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.967741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.967768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.967878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.967905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.968049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.968080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.968233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.968262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.968461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.968489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.968655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.968683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.968803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.968828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 
00:35:53.616 [2024-10-28 05:11:43.969019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.969048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.969200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.969228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.969364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.969392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.969554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.969582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.969706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.616 [2024-10-28 05:11:43.969733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.616 qpair failed and we were unable to recover it. 00:35:53.616 [2024-10-28 05:11:43.969876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.969902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.970057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.970086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.970266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.970295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.970430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.970456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.970596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.970623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 
00:35:53.617 [2024-10-28 05:11:43.970796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.970823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.970945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.970971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.971078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.971109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.971252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.971284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.971436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.971464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.971657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.971685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.971830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.971856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.972035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.972061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.972204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.972230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.972373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.972400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 
00:35:53.617 [2024-10-28 05:11:43.972568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.972597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.972776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.972803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.972954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.972984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.973128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.973157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.973333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.973363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.973523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.973549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.973684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.973713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.973838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.973865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.974016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.974048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.974189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.974215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 
00:35:53.617 [2024-10-28 05:11:43.974390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.974418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.974604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.974630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.974785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.974811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.974948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.974975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.975085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.975129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.975319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.975349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.975502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.975531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.975685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.975711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.975867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.975897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.976051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.976081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 
00:35:53.617 [2024-10-28 05:11:43.976231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.976261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.976426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.976453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.976609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.976647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.976779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.976805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.976988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.977018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.617 [2024-10-28 05:11:43.977187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.617 [2024-10-28 05:11:43.977213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.617 qpair failed and we were unable to recover it. 00:35:53.618 [2024-10-28 05:11:43.977400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.618 [2024-10-28 05:11:43.977432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.618 qpair failed and we were unable to recover it. 00:35:53.618 [2024-10-28 05:11:43.977614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.618 [2024-10-28 05:11:43.977651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.618 qpair failed and we were unable to recover it. 00:35:53.618 [2024-10-28 05:11:43.977805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.618 [2024-10-28 05:11:43.977834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.618 qpair failed and we were unable to recover it. 00:35:53.618 [2024-10-28 05:11:43.977960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.618 [2024-10-28 05:11:43.977986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.618 qpair failed and we were unable to recover it. 
00:35:53.618 [2024-10-28 05:11:43.978098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.618 [2024-10-28 05:11:43.978128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.618 qpair failed and we were unable to recover it.
[... the identical three-line failure above (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x1ac3390 at 10.0.0.2:4420, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt, with only the timestamps advancing, from 2024-10-28 05:11:43.978 through 05:11:44.015 ...]
00:35:53.623 [2024-10-28 05:11:44.015773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.623 [2024-10-28 05:11:44.015800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.623 qpair failed and we were unable to recover it.
00:35:53.623 [2024-10-28 05:11:44.015968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.016011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.016177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.016203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.016311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.016337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.016494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.016523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.016688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.016716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.016859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.016886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.017019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.017045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.017179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.017205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.017397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.017426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.017566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.017593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 
00:35:53.623 [2024-10-28 05:11:44.017775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.017805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.017967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.017996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.018148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.018177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.018364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.018391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.018546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.018577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.018772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.018799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.018927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.018953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.019128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.019157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.019310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.019338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.019518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.019548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 
00:35:53.623 [2024-10-28 05:11:44.019678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.019709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.019899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.623 [2024-10-28 05:11:44.019925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.623 qpair failed and we were unable to recover it. 00:35:53.623 [2024-10-28 05:11:44.020078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.020107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.020264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.020293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.020442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.020472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.020659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.020687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.020839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.020869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.020994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.021024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.021210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.021239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.021427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.021454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 
00:35:53.624 [2024-10-28 05:11:44.021598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.021624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.021752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.021779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.021973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.022002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.022140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.022166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.022284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.022310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.022447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.022473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.022662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.022692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.022850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.022877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.023042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.023090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.023235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.023264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 
00:35:53.624 [2024-10-28 05:11:44.023412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.023441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.023630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.023662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.023776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.023802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.023928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.023956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.024100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.024129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.024267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.024293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.024397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.024422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.024552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.024581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.024739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.024769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.024925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.024951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 
00:35:53.624 [2024-10-28 05:11:44.025068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.025094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.025232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.025262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.025425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.025455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.025618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.025652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.025764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.025808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.025980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.026006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.026172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.026198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.026374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.624 [2024-10-28 05:11:44.026399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.624 qpair failed and we were unable to recover it. 00:35:53.624 [2024-10-28 05:11:44.026545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.026591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.026787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.026814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 
00:35:53.625 [2024-10-28 05:11:44.026954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.026980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.027146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.027172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.027280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.027306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.027476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.027518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.027675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.027705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.027841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.027872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.028043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.028069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.028207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.028237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.028386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.028416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.028582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.028608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 
00:35:53.625 [2024-10-28 05:11:44.028724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.028769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.028921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.028951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.029131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.029160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.029320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.029346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.029459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.029486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.029649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.029678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.029827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.029869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.030040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.030066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.030222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.030253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.030412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.030443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 
00:35:53.625 [2024-10-28 05:11:44.030596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.030622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.030768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.030794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.030929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.030955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.031085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.031111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.031251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.031295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.031431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.031457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.031589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.031615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.031822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.031848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.031990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.032037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.032226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.032252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 
00:35:53.625 [2024-10-28 05:11:44.032407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.032436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.032569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.032602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.032803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.032834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.033011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.033038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.033176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.033204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.033413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.033439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.033574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.033600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.625 [2024-10-28 05:11:44.033763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.625 [2024-10-28 05:11:44.033790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.625 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.033930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.033959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.034089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.034119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 
00:35:53.626 [2024-10-28 05:11:44.034271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.034301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.034465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.034491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.034599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.034626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.034778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.034803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.034944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.034974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.035134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.035164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.035356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.035386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.035551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.035578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.035718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.035745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.035910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.035937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 
00:35:53.626 [2024-10-28 05:11:44.036072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.036101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.036291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.036320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.036463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.036492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.036659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.036686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.036817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.036843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.037064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.037093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.037273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.037302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.037430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.037456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.037564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.037589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.037786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.037812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 
00:35:53.626 [2024-10-28 05:11:44.037984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.038010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.038144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.038170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.038352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.038380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.038536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.038565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.038734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.038762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.038870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.038899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.039038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.039065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.039170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.039197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.039361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.039390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 00:35:53.626 [2024-10-28 05:11:44.039552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.626 [2024-10-28 05:11:44.039578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.626 qpair failed and we were unable to recover it. 
00:35:53.626 [2024-10-28 05:11:44.039696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.626 [2024-10-28 05:11:44.039740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.626 qpair failed and we were unable to recover it.
00:35:53.626 [2024-10-28 05:11:44.039889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.626 [2024-10-28 05:11:44.039918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.626 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats without variation for every intervening reconnection attempt, timestamps 05:11:44.040067 through 05:11:44.077370 ...]
00:35:53.632 [2024-10-28 05:11:44.077532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.632 [2024-10-28 05:11:44.077562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.632 qpair failed and we were unable to recover it.
00:35:53.632 [2024-10-28 05:11:44.077697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.077727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.077923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.077951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.078103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.078136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.078324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.078354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.078471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.078501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.078670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.078697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.078835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.078862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.079063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.079093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.079251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.079280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.079443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.079469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 
00:35:53.632 [2024-10-28 05:11:44.079612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.079649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.079787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.079814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.079980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.080009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.080147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.080173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.080291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.080327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.080459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.080488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.080645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.080672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.080808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.080833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.081001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.081030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.081173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.081203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 
00:35:53.632 [2024-10-28 05:11:44.081359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.081389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.081555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.081582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.081725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.081751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.081886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.081914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.082081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.082110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.082251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.082277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.082410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.082436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.082571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.082600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.082770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.082803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.082968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.082996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 
00:35:53.632 [2024-10-28 05:11:44.083130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.083176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.083332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.083362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.083508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.083537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.083701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.083728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.632 qpair failed and we were unable to recover it. 00:35:53.632 [2024-10-28 05:11:44.083848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.632 [2024-10-28 05:11:44.083875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.084083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.084109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.084271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.084297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.084437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.084467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.084631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.084668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.084844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.084874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 
00:35:53.633 [2024-10-28 05:11:44.085050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.085080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.085212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.085239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.085355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.085381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.085546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.085577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.085733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.085763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.085906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.085932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.086062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.086108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.086239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.086268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.086418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.086447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.086580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.086607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 
00:35:53.633 [2024-10-28 05:11:44.086761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.086788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.086971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.087016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.087195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.087224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.087358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.087385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.087566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.087595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.087766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.087793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.087933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.087960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.088098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.088124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.088264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.088308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.088465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.088494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 
00:35:53.633 [2024-10-28 05:11:44.088648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.088678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.088829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.088856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.089028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.089073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.089264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.089294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.089438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.089472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.089605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.089632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.089774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.089801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.090004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.090031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.090177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.090204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 00:35:53.633 [2024-10-28 05:11:44.090344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.633 [2024-10-28 05:11:44.090370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.633 qpair failed and we were unable to recover it. 
00:35:53.633 [2024-10-28 05:11:44.090518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.090544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.090682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.090709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.090865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.090893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.091054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.091080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.091263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.091293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.091465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.091495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.091690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.091717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.091827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.091854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.091993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.092020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.092183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.092209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 
00:35:53.634 [2024-10-28 05:11:44.092381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.092407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.092515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.092542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.092658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.092685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.092847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.092890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.093029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.093056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.093199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.093228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.093372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.093399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.093562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.093591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.093728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.093758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.093940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.093967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 
00:35:53.634 [2024-10-28 05:11:44.094109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.094136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.094303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.094329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.094467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.094496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.094683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.094709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.094869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.094906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.095020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.095050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.095174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.095202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.095390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.095416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.095526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.095552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.095686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.095712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 
00:35:53.634 [2024-10-28 05:11:44.095872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.095917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.096070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.096096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.096217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.096243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.096365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.096391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.096555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.096582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.096740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.096767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.096871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.096908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.097080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.097107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.097245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.097271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 00:35:53.634 [2024-10-28 05:11:44.097384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.097410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.634 qpair failed and we were unable to recover it. 
00:35:53.634 [2024-10-28 05:11:44.097594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.634 [2024-10-28 05:11:44.097623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.097783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.097812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.097964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.097993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.098131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.098159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.098300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.098326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.098448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.098477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.098660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.098699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.098833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.098859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.098998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad1330 is same with the state(6) to be set 00:35:53.635 [2024-10-28 05:11:44.099211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.099251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 
00:35:53.635 [2024-10-28 05:11:44.099456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.099493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.099682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.099714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.099840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.099884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.100025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.100055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.100237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.100263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.100400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.100426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.100568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.100610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.100753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.100781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.100920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.100964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.101115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.101143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 
00:35:53.635 [2024-10-28 05:11:44.101330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.101356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.101500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.101529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.101689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.101719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.101853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.101881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.102029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.102055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.102199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.102225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.102364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.102390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.102533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.102575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.102765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.102792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.102936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.102962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 
00:35:53.635 [2024-10-28 05:11:44.103075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.103117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.103270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.103299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.103457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.103484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.103625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.103676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.103811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.103840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.103967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.103994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.104128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.104158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.104337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.104363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.104509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.104536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.104726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.104756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 
00:35:53.635 [2024-10-28 05:11:44.104914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.635 [2024-10-28 05:11:44.104941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.635 qpair failed and we were unable to recover it. 00:35:53.635 [2024-10-28 05:11:44.105105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.105132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.105288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.105317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.105467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.105497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.105680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.105706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.105821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.105846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.105980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.106007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.106138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.106166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.106311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.106354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.106479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.106508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 
00:35:53.636 [2024-10-28 05:11:44.106650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.106679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.106862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.106890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.107021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.107050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.107235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.107262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.107412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.107442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.107617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.107668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.107841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.107866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.107981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.108008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.108149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.108178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.108346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.108374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 
00:35:53.636 [2024-10-28 05:11:44.108516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.108544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.108671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.108698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.108853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.108878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.109043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.109074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.109188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.109215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.109386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.109415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.109552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.109597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.109763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.109789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.109901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.109928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.110102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.110145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 
00:35:53.636 [2024-10-28 05:11:44.110292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.110322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.110451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.110478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.110617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.110651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.110832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.110859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.110998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.111024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.111132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.111158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.111291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.111322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.111481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.111510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.111656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.111702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.111832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.111861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 
00:35:53.636 [2024-10-28 05:11:44.111996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.112022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.636 [2024-10-28 05:11:44.112163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.636 [2024-10-28 05:11:44.112189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.636 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.112348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.112374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.112541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.112568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.112779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.112806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.112943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.112969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.113138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.113165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.113320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.113351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.113519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.113545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.113689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.113716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 
00:35:53.637 [2024-10-28 05:11:44.113860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.113891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.113996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.114022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.114189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.114216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.114359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.114385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.114555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.114583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.114750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.114776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.114933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.114961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.115115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.115144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.115297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.115323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.115466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.115492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 
00:35:53.637 [2024-10-28 05:11:44.115608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.115642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.115819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.115845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.116000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.116029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.116172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.116204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.116394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.116422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.116580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.116609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.116770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.116801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.116967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.116993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.117130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.117175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.117361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.117390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 
00:35:53.637 [2024-10-28 05:11:44.117549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.117575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.117718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.117745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.117889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.117932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.118121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.118147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.118310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.118337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.118498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.118524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.118668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.118695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.118831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.118857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.119048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.119078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 00:35:53.637 [2024-10-28 05:11:44.119231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.637 [2024-10-28 05:11:44.119257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.637 qpair failed and we were unable to recover it. 
00:35:53.638 [2024-10-28 05:11:44.119396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.119442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.119610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.119645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.119788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.119814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.119982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.120009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.120163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.120194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.120382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.120409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.120573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.120600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.120771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.120814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.121006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.121033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.121166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.121192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 
00:35:53.638 [2024-10-28 05:11:44.121328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.121356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.121498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.121528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.121647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.121692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.121846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.121875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.122030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.122056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.122203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.122229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.122387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.122416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.122575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.122602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.122795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.122825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.122982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.123012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 
00:35:53.638 [2024-10-28 05:11:44.123178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.123204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.123361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.123392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.123546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.123574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.123747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.123777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.123892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.123920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.124087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.124116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.124272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.124298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.124482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.124511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.124703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.124729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.124836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.124863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 
00:35:53.638 [2024-10-28 05:11:44.125006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.125032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.125194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.125224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.125412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.125438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.125572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.125598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.125774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.125804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.125947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.125973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.126117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.126160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.126287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.638 [2024-10-28 05:11:44.126317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.638 qpair failed and we were unable to recover it. 00:35:53.638 [2024-10-28 05:11:44.126481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.126514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.126658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.126704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 
00:35:53.639 [2024-10-28 05:11:44.126884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.126913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.127070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.127097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.127280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.127310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.127423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.127452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.127614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.127655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.127762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.127789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.127962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.127988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.128155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.128182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.128346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.128376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.128565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.128591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 
00:35:53.639 [2024-10-28 05:11:44.128765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.128792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.128973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.129002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.129130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.129159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.129321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.129347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.129459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.129485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.129650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.129677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.129855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.129881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.129994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.130036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.130187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.130215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.130402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.130428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 
00:35:53.639 [2024-10-28 05:11:44.130577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.130603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.130750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.130777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.130918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.130945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.131085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.131111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.131301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.131330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.131486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.131516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.131702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.131731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.131883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.131913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.132071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.132109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 00:35:53.639 [2024-10-28 05:11:44.132244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.639 [2024-10-28 05:11:44.132270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.639 qpair failed and we were unable to recover it. 
00:35:53.639 [2024-10-28 05:11:44.132456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.639 [2024-10-28 05:11:44.132485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.639 qpair failed and we were unable to recover it.
[The same three-line error group repeats continuously from 05:11:44.132456 through 05:11:44.155756, always for tqpair=0x1ac3390 with addr=10.0.0.2, port=4420: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports the socket connection error, and the entry ends with "qpair failed and we were unable to recover it." Only the timestamps differ between repetitions.]
[One more attempt at 05:11:44.155889 fails the same way for tqpair=0x1ac3390; the next attempts target a different qpair:]
00:35:53.643 [2024-10-28 05:11:44.156069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.643 [2024-10-28 05:11:44.156106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:35:53.643 qpair failed and we were unable to recover it.
[Two further attempts at 05:11:44.156311 and 05:11:44.156513 fail identically for tqpair=0x7f9f10000b90, after which the failures resume on tqpair=0x1ac3390 from 05:11:44.156759 through 05:11:44.157745.]
[The same connect()/nvme_tcp_qpair_connect_sock error group then keeps repeating for tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 from 05:11:44.157913 through 05:11:44.169812; every attempt fails with errno = 111 and ends with "qpair failed and we were unable to recover it."]
00:35:53.929 [2024-10-28 05:11:44.169959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.169985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.170126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.170153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.170293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.170323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.170455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.170481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.170613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.170646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.170783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.170810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.170946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.170971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.171084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.171110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.171226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.171254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.171426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.171453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 
00:35:53.929 [2024-10-28 05:11:44.171594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.171621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.171742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.171769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.171885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.171912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.172075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.172101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.172202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.172228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.172367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.172394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.172502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.172528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.172661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.172688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.172845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.172874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.173036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.173062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 
00:35:53.929 [2024-10-28 05:11:44.173198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.173242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.173409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.173435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.173544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.173570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.173728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.173755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.173938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.173967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.174130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.174157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.174297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.174325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.174480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.174509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.174664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.174690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.174835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.174861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 
00:35:53.929 [2024-10-28 05:11:44.175014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.175040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.175153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.175180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.175320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.175363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.175516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.175545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.929 [2024-10-28 05:11:44.175677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.929 [2024-10-28 05:11:44.175705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.929 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.175849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.175893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.176048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.176077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.176243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.176270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.176405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.176431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.176591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.176642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 
00:35:53.930 [2024-10-28 05:11:44.176761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.176788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.176925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.176952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.177141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.177171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.177364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.177390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.177503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.177530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.177668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.177695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.177841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.177867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.178035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.178079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.178230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.178260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.178448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.178475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 
00:35:53.930 [2024-10-28 05:11:44.178616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.178650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.178798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.178841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.179007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.179033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.179173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.179200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.179365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.179394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.179526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.179554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.179719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.179746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.179891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.179934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.180071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.180097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.180214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.180240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 
00:35:53.930 [2024-10-28 05:11:44.180381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.180409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.180586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.180613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.180772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.180808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.180972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.181008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.181194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.181225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.181398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.181428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.181572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.181601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.181773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.181800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.181943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.181970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.182133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.182178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 
00:35:53.930 [2024-10-28 05:11:44.182345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.182371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.182488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.182516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.182693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.182738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.182873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.182900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.930 qpair failed and we were unable to recover it. 00:35:53.930 [2024-10-28 05:11:44.183005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.930 [2024-10-28 05:11:44.183032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.183190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.183219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.183381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.183407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.183585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.183614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.183747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.183777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.183908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.183934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 
00:35:53.931 [2024-10-28 05:11:44.184100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.184144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.184303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.184329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.184437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.184463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.184602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.184628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.184802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.184829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.184946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.184972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.185119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.185145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.185313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.185341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.185502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.185528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.185680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.185711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 
00:35:53.931 [2024-10-28 05:11:44.185835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.185864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.186024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.186051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.186192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.186237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.186359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.186388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.186553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.186579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.186758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.186787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.186947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.186978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.187166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.187196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.187360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.187404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.187561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.187587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 
00:35:53.931 [2024-10-28 05:11:44.187731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.187759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.187871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.187913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.188034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.188064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.188213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.188240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.188404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.188448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.188573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.188620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.188745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.188771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.188935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.188981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.189132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.189161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.189346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.189372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 
00:35:53.931 [2024-10-28 05:11:44.189490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.189516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.189675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.189702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.189840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.189867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.190019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.931 [2024-10-28 05:11:44.190048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.931 qpair failed and we were unable to recover it. 00:35:53.931 [2024-10-28 05:11:44.190228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.190258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.190420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.190446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.190631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.190669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.190788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.190817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.190978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.191004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.191146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.191189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 
00:35:53.932 [2024-10-28 05:11:44.191335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.191363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.191501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.191527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.191691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.191734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.191883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.191913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.192051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.192084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.192225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.192251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.192387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.192413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.192550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.192576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.192743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.192770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.192933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.192961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 
00:35:53.932 [2024-10-28 05:11:44.193115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.193141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.193254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.193282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.193452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.193481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.193613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.193646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.193815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.193843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.193973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.193999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.194138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.194164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.194315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.194343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.194507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.194536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.194704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.194731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 
00:35:53.932 [2024-10-28 05:11:44.194867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.194893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.195078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.195106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.195264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.195290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.195432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.195475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.195646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.195676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.195844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.195872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.196034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.196065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.196221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.196251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.196386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.196413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.196576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.196602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 
00:35:53.932 [2024-10-28 05:11:44.196750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.196777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.196916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.196942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.197061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.197087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.932 [2024-10-28 05:11:44.197218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.932 [2024-10-28 05:11:44.197243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.932 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.197378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.197403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.197594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.197624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.197794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.197824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.197987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.198014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.198181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.198224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.198373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.198402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 
00:35:53.933 [2024-10-28 05:11:44.198576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.198602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.198726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.198753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.198872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.198898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.199099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.199125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.199293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.199337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.199470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.199500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.199659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.199686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.199866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.199896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.200074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.200103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.200234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.200261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 
00:35:53.933 [2024-10-28 05:11:44.200429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.200475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.200659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.200687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.200808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.200835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.200982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.201008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.201147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.201188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.201354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.201380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.201568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.201597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.201785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.201815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.201947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.201973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.202106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.202133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 
00:35:53.933 [2024-10-28 05:11:44.202301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.202344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.202502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.202528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.202692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.202718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.202890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.202917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.203056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.203082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.203250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.933 [2024-10-28 05:11:44.203280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.933 qpair failed and we were unable to recover it. 00:35:53.933 [2024-10-28 05:11:44.203441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.203467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.203611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.203644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.203762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.203788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.203895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.203921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 
00:35:53.934 [2024-10-28 05:11:44.204058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.204084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.204243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.204271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.204431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.204465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.204625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.204660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.204792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.204819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.204951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.204980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.205141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.205170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.205311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.205339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.205504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.205532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.205720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.205747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 
00:35:53.934 [2024-10-28 05:11:44.205930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.205960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.206116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.206142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.206280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.206306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.206466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.206496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.206674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.206704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.206868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.206894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.207020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.207046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.207183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.207209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.207415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.207440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.207569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.207598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 
00:35:53.934 [2024-10-28 05:11:44.207799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.207827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.207967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.207997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.208148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.208178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.208321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.208363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.208525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.208552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.208701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.208731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.208890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.208917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.209058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.209085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.209227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.209254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.209388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.209419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 
00:35:53.934 [2024-10-28 05:11:44.209601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.209627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.209836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.209866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.209987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.210017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.210201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.210228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.210339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.210365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.934 [2024-10-28 05:11:44.210508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.934 [2024-10-28 05:11:44.210535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.934 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.210687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.210714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.210876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.210906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.211058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.211087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.211221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.211247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 
00:35:53.935 [2024-10-28 05:11:44.211390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.211416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.211585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.211615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.211759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.211787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.211910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.211936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.212099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.212129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.212293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.212320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.212484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.212528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.212695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.212726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.212850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.212875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.213005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.213032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 
00:35:53.935 [2024-10-28 05:11:44.213198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.213226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.213359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.213385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.213519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.213547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.213689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.213717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.213882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.213909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.214057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.214086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.214239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.214273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.214436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.214464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.214622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.214660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.214814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.214843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 
00:35:53.935 [2024-10-28 05:11:44.214998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.215024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.215135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.215162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.215336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.215379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.215543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.215570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.215724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.215754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.215943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.215969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.216080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.216106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.216215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.216241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.216416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.216441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.216607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.216639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 
00:35:53.935 [2024-10-28 05:11:44.216799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.216829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.216953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.216982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.217114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.217140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.217249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.217275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.217431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.217460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.935 qpair failed and we were unable to recover it. 00:35:53.935 [2024-10-28 05:11:44.217623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.935 [2024-10-28 05:11:44.217657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.217767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.217793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.217930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.217957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.218148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.218174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.218327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.218356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 
00:35:53.936 [2024-10-28 05:11:44.218501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.218532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.218704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.218731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.218881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.218911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.219038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.219067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.219225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.219251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.219391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.219434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.219578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.219607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.219754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.219781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.219921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.219964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.220088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.220118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 
00:35:53.936 [2024-10-28 05:11:44.220310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.220336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.220520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.220550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.220700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.220730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.220893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.220919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.221025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.221051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.221219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.221248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.221383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.221411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.221581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.221608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.221800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.221827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.221961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.221987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 
00:35:53.936 [2024-10-28 05:11:44.222170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.222199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.222351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.222381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.222537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.222563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.222678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.222708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.222885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.222915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.223078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.223104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.223220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.223267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.223382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.223412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.223550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.223577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.223718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.223761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 
00:35:53.936 [2024-10-28 05:11:44.223890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.223919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.224087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.224114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.224278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.224304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.224474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.224504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.224692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.224719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.936 qpair failed and we were unable to recover it. 00:35:53.936 [2024-10-28 05:11:44.224877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.936 [2024-10-28 05:11:44.224907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.225055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.225085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.225227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.225253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.225420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.225447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.225604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.225644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 
00:35:53.937 [2024-10-28 05:11:44.225816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.225844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.226029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.226058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.226234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.226263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.226396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.226423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.226563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.226594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.226749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.226795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.226931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.226958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.227097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.227123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.227263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.227289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.227468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.227495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 
00:35:53.937 [2024-10-28 05:11:44.227611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.227643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.227813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.227839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.227977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.228002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.228129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.228175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.228365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.228394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.228582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.228609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.228755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.228782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.228954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.228980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.229103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.229130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.229244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.229271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 
00:35:53.937 [2024-10-28 05:11:44.229411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.229440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.229624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.229674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.229821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.229850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.229991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.230036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.230201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.230227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.230332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.230358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.230497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.230523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.230665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.230693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.230829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.230855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.937 [2024-10-28 05:11:44.230981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.231008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 
00:35:53.937 [2024-10-28 05:11:44.231211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.937 [2024-10-28 05:11:44.231238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.937 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.231343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.231391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.231585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.231611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.231735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.231761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.231938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.231967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.232144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.232173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.232334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.232361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.232545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.232574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.232738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.232767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.232930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.232956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 
00:35:53.938 [2024-10-28 05:11:44.233090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.233131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.233317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.233349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.233510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.233537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.233645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.233672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.233879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.233905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.234045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.234072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.234206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.234232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.234371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.234398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.234539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.234565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.234717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.234744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 
00:35:53.938 [2024-10-28 05:11:44.234909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.234952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.235119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.235146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.235303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.235333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.235457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.235487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.235646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.235673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.235855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.235884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.236031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.236061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.236251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.236277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.236434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.236466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.236648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.236676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 
00:35:53.938 [2024-10-28 05:11:44.236784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.236810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.236976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.237019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.237182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.237211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.237355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.237382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.237499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.237525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.237689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.237716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.237854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.237880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.237999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.238042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.238159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.238188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.938 [2024-10-28 05:11:44.238345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.238372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 
00:35:53.938 [2024-10-28 05:11:44.238511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.938 [2024-10-28 05:11:44.238537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.938 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.238734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.238763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.238905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.238932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.239074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.239100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.239273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.239299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.239476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.239502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.239661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.239691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.239806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.239836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.240027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.240053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.240203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.240232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 
00:35:53.939 [2024-10-28 05:11:44.240383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.240413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.240564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.240590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.240741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.240769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.240906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.240949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.241083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.241109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.241257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.241286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.241445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.241472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.241607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.241640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.241799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.241828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.241985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.242015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 
00:35:53.939 [2024-10-28 05:11:44.242171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.242197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.242378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.242408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.242582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.242611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.242790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.242817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.242979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.243008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.243168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.243194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.243361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.243389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.243496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.243540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.243671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.243703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.243863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.243893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 
00:35:53.939 [2024-10-28 05:11:44.244010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.244036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.244151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.244177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.244295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.244321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.244504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.244533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.244684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.244714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.244872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.244900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.245064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.245091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.245256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.245285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.245448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.245473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 00:35:53.939 [2024-10-28 05:11:44.245676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.245706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.939 qpair failed and we were unable to recover it. 
00:35:53.939 [2024-10-28 05:11:44.245828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.939 [2024-10-28 05:11:44.245859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.246020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.246046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.246182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.246226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.246345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.246375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.246507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.246534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.246644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.246672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.246831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.246862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.247010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.247037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.247181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.247225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.247357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.247386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 
00:35:53.940 [2024-10-28 05:11:44.247544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.247571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.247712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.247739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.247903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.247929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.248040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.248067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.248201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.248243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.248423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.248452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.248613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.248651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.248843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.248875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.249055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.249084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.249243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.249269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 
00:35:53.940 [2024-10-28 05:11:44.249437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.249464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.249572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.249599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.249758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.249786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.249945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.249974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.250151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.250181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.250346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.250373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.250537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.250564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.250697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.250724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.250831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.250858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.250970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.250994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 
00:35:53.940 [2024-10-28 05:11:44.251138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.251165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.251307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.251333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.251452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.251478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.251646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.251674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.251847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.251873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.252033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.252062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.252258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.252284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.252444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.252470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.252655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.252684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 00:35:53.940 [2024-10-28 05:11:44.252860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.252890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.940 qpair failed and we were unable to recover it. 
00:35:53.940 [2024-10-28 05:11:44.253039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.940 [2024-10-28 05:11:44.253066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-10-28 05:11:44.253205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-10-28 05:11:44.253250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-10-28 05:11:44.253401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-10-28 05:11:44.253430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-10-28 05:11:44.253556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-10-28 05:11:44.253588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-10-28 05:11:44.253762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-10-28 05:11:44.253790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-10-28 05:11:44.253928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-10-28 05:11:44.253957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-10-28 05:11:44.254149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-10-28 05:11:44.254176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-10-28 05:11:44.254287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-10-28 05:11:44.254313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-10-28 05:11:44.254448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-10-28 05:11:44.254474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 00:35:53.941 [2024-10-28 05:11:44.254643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.941 [2024-10-28 05:11:44.254671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.941 qpair failed and we were unable to recover it. 
00:35:53.941 [2024-10-28 05:11:44.254828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.941 [2024-10-28 05:11:44.254858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.941 qpair failed and we were unable to recover it.
[... the identical connect() failure (errno = 111, connection refused) and unrecoverable qpair error for tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 repeat continuously from 05:11:44.255 through 05:11:44.293 ...]
00:35:53.946 [2024-10-28 05:11:44.293260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.946 [2024-10-28 05:11:44.293286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.946 qpair failed and we were unable to recover it.
00:35:53.946 [2024-10-28 05:11:44.293393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.946 [2024-10-28 05:11:44.293438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.946 qpair failed and we were unable to recover it. 00:35:53.946 [2024-10-28 05:11:44.293625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.946 [2024-10-28 05:11:44.293663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.946 qpair failed and we were unable to recover it. 00:35:53.946 [2024-10-28 05:11:44.293828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.946 [2024-10-28 05:11:44.293854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.946 qpair failed and we were unable to recover it. 00:35:53.946 [2024-10-28 05:11:44.294010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.946 [2024-10-28 05:11:44.294039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.946 qpair failed and we were unable to recover it. 00:35:53.946 [2024-10-28 05:11:44.294191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.946 [2024-10-28 05:11:44.294220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.946 qpair failed and we were unable to recover it. 00:35:53.946 [2024-10-28 05:11:44.294354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.946 [2024-10-28 05:11:44.294382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.946 qpair failed and we were unable to recover it. 00:35:53.946 [2024-10-28 05:11:44.294520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.946 [2024-10-28 05:11:44.294545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.946 qpair failed and we were unable to recover it. 00:35:53.946 [2024-10-28 05:11:44.294687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.946 [2024-10-28 05:11:44.294714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.946 qpair failed and we were unable to recover it. 00:35:53.946 [2024-10-28 05:11:44.294881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.946 [2024-10-28 05:11:44.294907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.946 qpair failed and we were unable to recover it. 00:35:53.946 [2024-10-28 05:11:44.295059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.295088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 
00:35:53.947 [2024-10-28 05:11:44.295213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.295241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.295426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.295454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.295619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.295656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.295842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.295872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.296027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.296054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.296239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.296271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.296464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.296491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.296603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.296629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.296773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.296816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.296962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.297005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 
00:35:53.947 [2024-10-28 05:11:44.297146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.297174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.297336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.297366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.297493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.297524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.297712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.297740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.297874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.297900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.298042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.298068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.298232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.298258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.298419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.298448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.298611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.298650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.298787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.298813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 
00:35:53.947 [2024-10-28 05:11:44.298956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.298982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.299116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.299142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.299303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.299329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.299433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.299459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.299628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.299662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.299791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.299817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.299957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.299983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.300147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.300178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.300351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.300379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.300488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.300515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 
00:35:53.947 [2024-10-28 05:11:44.300679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.300706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.300885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.300915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.301068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.301100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.301255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.301283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.301465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.301492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.301647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.301678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.301826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.301855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.302016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.302044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.302185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.947 [2024-10-28 05:11:44.302237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.947 qpair failed and we were unable to recover it. 00:35:53.947 [2024-10-28 05:11:44.302411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.302440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 
00:35:53.948 [2024-10-28 05:11:44.302596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.302622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.302739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.302766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.302923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.302949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.303089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.303115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.303269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.303300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.303452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.303482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.303669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.303696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.303861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.303890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.304056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.304081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.304220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.304246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 
00:35:53.948 [2024-10-28 05:11:44.304354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.304397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.304526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.304555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.304705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.304732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.304877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.304904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.305041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.305084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.305274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.305303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.305441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.305467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.305617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.305661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.305797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.305827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.305974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.306004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 
00:35:53.948 [2024-10-28 05:11:44.306179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.306211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.306375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.306402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.306540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.306584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.306750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.306780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.306932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.306958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.307094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.307121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.307260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.307288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.307404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.307430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.307550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.307577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.307762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.307793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 
00:35:53.948 [2024-10-28 05:11:44.307955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.307981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.308119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.308163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.308345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.308375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.308507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.308533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.308679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.308709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.308901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.308931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.309061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.309087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.309251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.309293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.948 [2024-10-28 05:11:44.309439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.948 [2024-10-28 05:11:44.309468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.948 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.309619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.309652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 
00:35:53.949 [2024-10-28 05:11:44.309767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.309794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.309926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.309959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.310152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.310179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.310314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.310342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.310467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.310496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.310656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.310690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.310831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.310857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.310963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.310989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.311102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.311131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.311268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.311294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 
00:35:53.949 [2024-10-28 05:11:44.311487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.311516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.311674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.311700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.311841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.311885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.312037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.312069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.312235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.312266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.312425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.312457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.312603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.312632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.312783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.312809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.312943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.312970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.313137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.313164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 
00:35:53.949 [2024-10-28 05:11:44.313276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.313302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.313439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.313465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.313601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.313628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.313795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.313821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.313990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.314016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.314146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.314176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.314347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.314377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.314534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.314566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.314748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.314778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.314946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.314972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 
00:35:53.949 [2024-10-28 05:11:44.315106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.315133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.315313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.315340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.315478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.315504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.315664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.315694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.315837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.315866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.316053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.316079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.316236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.316265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.316414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.316444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.949 qpair failed and we were unable to recover it. 00:35:53.949 [2024-10-28 05:11:44.316609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.949 [2024-10-28 05:11:44.316640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.950 qpair failed and we were unable to recover it. 00:35:53.950 [2024-10-28 05:11:44.316774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.950 [2024-10-28 05:11:44.316820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.950 qpair failed and we were unable to recover it. 
00:35:53.950 [2024-10-28 05:11:44.316970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.950 [2024-10-28 05:11:44.316999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.950 qpair failed and we were unable to recover it.
00:35:53.950 - 00:35:53.955 [2024-10-28 05:11:44.317164 - 05:11:44.355089] The same failure repeats continuously: posix_sock_create reports connect() failed, errno = 111 (connection refused), and nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x1ac3390 (and, for a short run, tqpair=0x7f9f10000b90) with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it."
00:35:53.955 [2024-10-28 05:11:44.355245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.355288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.355455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.355481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.355587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.355628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.355823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.355850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.355968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.355994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.356098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.356124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.356258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.356287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.356422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.356448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.356559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.356585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.356724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.356752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 
00:35:53.955 [2024-10-28 05:11:44.356895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.356922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.357056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.357082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.357214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.357240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.357426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.357456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.357595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.357646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.955 [2024-10-28 05:11:44.357806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.955 [2024-10-28 05:11:44.357833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.955 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.357973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.357998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.358151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.358180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.358363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.358388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.358526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.358555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 
00:35:53.956 [2024-10-28 05:11:44.358703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.358729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.358870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.358898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.359077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.359103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.359241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.359267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.359433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.359463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.359619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.359654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.359795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.359841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.360009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.360039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.360178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.360204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.360366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.360392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 
00:35:53.956 [2024-10-28 05:11:44.360555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.360583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.360731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.360758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.360924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.360950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.361132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.361158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.361268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.361294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.361434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.361475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.361619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.361653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.361841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.361867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.361983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.362008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.362178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.362204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 
00:35:53.956 [2024-10-28 05:11:44.362344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.362370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.362479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.362504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.362657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.362684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.362793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.362818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.362930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.362957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.363138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.363167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.363328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.363353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.363469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.363495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.363601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.363627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.363772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.363798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 
00:35:53.956 [2024-10-28 05:11:44.363941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.363967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.364107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.364133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.364324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.364350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.364486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.364528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.364682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.364712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.364898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.956 [2024-10-28 05:11:44.364924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.956 qpair failed and we were unable to recover it. 00:35:53.956 [2024-10-28 05:11:44.365036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.365080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.365223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.365265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.365403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.365429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.365611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.365645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 
00:35:53.957 [2024-10-28 05:11:44.365803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.365832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.365987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.366013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.366195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.366224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.366374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.366415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.366517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.366543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.366684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.366726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.366877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.366905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.367066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.367096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.367212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.367238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.367347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.367372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 
00:35:53.957 [2024-10-28 05:11:44.367505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.367530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.367716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.367745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.367926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.367955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.368082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.368108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.368215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.368241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.368374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.368399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.368534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.368560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.368675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.368718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.368873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.368899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.369059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.369085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 
00:35:53.957 [2024-10-28 05:11:44.369250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.369276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.369416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.369442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.369608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.369638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.369792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.369817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.369961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.370002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.370162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.370188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.370321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.370361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.370524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.370551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.370686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.370713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.370855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.370881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 
00:35:53.957 [2024-10-28 05:11:44.371044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.371070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.371217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.371243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.371379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.371405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.371563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.371591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.371781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.371812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.957 qpair failed and we were unable to recover it. 00:35:53.957 [2024-10-28 05:11:44.371995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.957 [2024-10-28 05:11:44.372024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.372174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.372203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.372357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.372382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.372518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.372561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.372749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.372778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 
00:35:53.958 [2024-10-28 05:11:44.372929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.372955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.373095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.373139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.373288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.373317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.373484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.373510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.373700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.373729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.373869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.373894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.374011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.374038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.374180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.374221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.374403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.374432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.374563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.374589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 
00:35:53.958 [2024-10-28 05:11:44.374775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.374804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.374923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.374951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.375112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.375138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.375319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.375348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.375512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.375537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.375680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.375707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.375821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.375847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.375955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.375981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.376148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.376175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.376312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.376338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 
00:35:53.958 [2024-10-28 05:11:44.376472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.376501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.376656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.376687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.376837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.376863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.377026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.377052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.377189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.377215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.377330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.377372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.377495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.377524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.377688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.377714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.377851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.377894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.378049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.378078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 
00:35:53.958 [2024-10-28 05:11:44.378208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.378235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.378366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.378392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.378587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.378615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.378783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.378809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.378946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.958 [2024-10-28 05:11:44.378988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.958 qpair failed and we were unable to recover it. 00:35:53.958 [2024-10-28 05:11:44.379140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.379169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.379320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.379346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.379476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.379501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.379673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.379704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.379891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.379917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 
00:35:53.959 [2024-10-28 05:11:44.380023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.380049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.380188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.380216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.380405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.380431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.380580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.380606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.380784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.380810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.380928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.380954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.381085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.381111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.381302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.381331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.381490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.381516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.381675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.381705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 
00:35:53.959 [2024-10-28 05:11:44.381863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.381889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.382000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.382025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.382163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.382188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.382382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.382410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.382535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.382561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.382671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.382698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.382830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.382858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.383013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.383038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.383171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.383215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.383360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.383385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 
00:35:53.959 [2024-10-28 05:11:44.383525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.383550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.383685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.383729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.383889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.383919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.384080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.384106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.384286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.384315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.384453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.384479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.384618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.384651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.384781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.384825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.385009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.385035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 00:35:53.959 [2024-10-28 05:11:44.385202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.959 [2024-10-28 05:11:44.385228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.959 qpair failed and we were unable to recover it. 
00:35:53.959 [2024-10-28 05:11:44.385351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.385380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.385558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.385587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.385736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.385762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.385864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.385890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.386082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.386111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.386246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.386271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.386416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.386442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.386554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.386579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.386698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.386726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.386912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.386941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 
00:35:53.960 [2024-10-28 05:11:44.387119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.387147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.387281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.387306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.387435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.387461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.387649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.387678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.387860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.387886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.387986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.388028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.388212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.388237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.388361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.388387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.388564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.388592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.388775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.388809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 
00:35:53.960 [2024-10-28 05:11:44.388975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.389001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.389143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.389169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.389352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.389378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.389516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.389541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.389695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.389725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.389849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.389877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.390032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.390058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.390175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.390201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.390343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.390369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.390507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.390533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 
00:35:53.960 [2024-10-28 05:11:44.390714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.390744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.390923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.390952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.391086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.391113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.391252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.391278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.391471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.391500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.391668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.391695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.391859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.391885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.392021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.392046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.392200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.392225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 00:35:53.960 [2024-10-28 05:11:44.392365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.960 [2024-10-28 05:11:44.392390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.960 qpair failed and we were unable to recover it. 
00:35:53.960 [2024-10-28 05:11:44.392567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.392592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.392719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.392745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.392885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.392928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.393087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.393114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.393246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.393271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.393412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.393438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.393568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.393598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.393742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.393769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.393926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.393954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.394105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.394135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 
00:35:53.961 [2024-10-28 05:11:44.394295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.394321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.394437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.394479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.394600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.394662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.394784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.394810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.394945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.394987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.395174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.395199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.395367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.395393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.395545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.395573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.395751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.395781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.395923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.395949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 
00:35:53.961 [2024-10-28 05:11:44.396057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.396083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.396246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.396271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.396379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.396405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.396515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.396541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.396705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.396734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.396924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.396950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.397065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.397092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.397206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.397232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.397364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.397390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.397492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.397518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 
00:35:53.961 [2024-10-28 05:11:44.397649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.397675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.397814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.397840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.397999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.398040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.398157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.398185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.398345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.398371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.398512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.398537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.398659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.398686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.398792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.398818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.398958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.398984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 00:35:53.961 [2024-10-28 05:11:44.399116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.961 [2024-10-28 05:11:44.399142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.961 qpair failed and we were unable to recover it. 
00:35:53.961 [2024-10-28 05:11:44.399251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.399277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.399436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.399461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.399561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.399587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.399750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.399776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.399971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.399997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.400131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.400158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.400275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.400301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.400469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.400495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.400663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.400693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.400853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.400879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 
00:35:53.962 [2024-10-28 05:11:44.401011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.401037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.401147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.401173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.401313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.401339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.401535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.401564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.401732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.401759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.401868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.401893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.402080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.402109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.402229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.402258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.402415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.402441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.402605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.402630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 
00:35:53.962 [2024-10-28 05:11:44.402831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.402860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.403031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.403056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.403164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.403190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.403384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.403412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.403566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.403592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.403741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.403768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.403939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.403965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.404126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.404151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.404333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.404362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.404550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.404576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 
00:35:53.962 [2024-10-28 05:11:44.404706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.404732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.404864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.404907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.405057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.405086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.405241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.405266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.405378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.405408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.405573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.405603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.405766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.405792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.405904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.405930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.406068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.406093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-10-28 05:11:44.406235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.962 [2024-10-28 05:11:44.406261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.962 qpair failed and we were unable to recover it. 
00:35:53.962 [2024-10-28 05:11:44.406402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-10-28 05:11:44.406443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-10-28 05:11:44.406594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-10-28 05:11:44.406623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-10-28 05:11:44.406788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-10-28 05:11:44.406814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-10-28 05:11:44.406952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-10-28 05:11:44.406977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-10-28 05:11:44.407092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-10-28 05:11:44.407118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-10-28 05:11:44.407217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-10-28 05:11:44.407242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-10-28 05:11:44.407345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-10-28 05:11:44.407370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-10-28 05:11:44.407516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-10-28 05:11:44.407541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-10-28 05:11:44.407656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-10-28 05:11:44.407683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-10-28 05:11:44.407819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-10-28 05:11:44.407844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 
00:35:53.963 [2024-10-28 05:11:44.408034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.963 [2024-10-28 05:11:44.408059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.963 qpair failed and we were unable to recover it.
00:35:53.963 [... the same three-line error repeats for every reconnect attempt from 05:11:44.408 through 05:11:44.445: connect() to 10.0.0.2, port 4420 fails with errno = 111 and tqpair=0x1ac3390 cannot be recovered ...]
00:35:53.968 [2024-10-28 05:11:44.445532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-10-28 05:11:44.445558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-10-28 05:11:44.445723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-10-28 05:11:44.445752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-10-28 05:11:44.445908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-10-28 05:11:44.445934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-10-28 05:11:44.446070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-10-28 05:11:44.446113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-10-28 05:11:44.446279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-10-28 05:11:44.446308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-10-28 05:11:44.446468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-10-28 05:11:44.446493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-10-28 05:11:44.446689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-10-28 05:11:44.446719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-10-28 05:11:44.446909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-10-28 05:11:44.446935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-10-28 05:11:44.447083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-10-28 05:11:44.447109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-10-28 05:11:44.447223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-10-28 05:11:44.447265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.447441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.447469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 
00:35:53.969 [2024-10-28 05:11:44.447606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.447632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.447776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.447816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.447962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.447991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.448152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.448177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.448318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.448362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.448480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.448508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.448661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.448687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.448871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.448899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.449058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.449091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.449252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.449278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 
00:35:53.969 [2024-10-28 05:11:44.449415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.449440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.449616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.449647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.449793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.449819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.449957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.449983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.450134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.450163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.450330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.450356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.450537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.450566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.450718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.450760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.450863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.450889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.451019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.451045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 
00:35:53.969 [2024-10-28 05:11:44.451172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.451201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.451335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.451361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.451505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.451547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.451726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.451755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.451936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.451961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.452154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.452183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.452360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.452389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.452541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.452567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.452685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.452729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.452911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.452938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 
00:35:53.969 [2024-10-28 05:11:44.453112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.453139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.453288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.453317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.453462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.453491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.453651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.453678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.453855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.453884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.454038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.454071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.454225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.454251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.454393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.454419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.454527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.969 [2024-10-28 05:11:44.454553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.969 qpair failed and we were unable to recover it. 00:35:53.969 [2024-10-28 05:11:44.454685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.454711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 
00:35:53.970 [2024-10-28 05:11:44.454898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.454927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.455122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.455149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.455292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.455318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.455456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.455497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.455681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.455711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.455841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.455867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.455979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.456006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.456170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.456196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.456339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.456364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.456524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.456560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 
00:35:53.970 [2024-10-28 05:11:44.456759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.456791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.456983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.457014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.457248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.457298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.457463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.457489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.457658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.457684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.457846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.457875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.458025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.458053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.458206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.458232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.458375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.458402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.458538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.458564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 
00:35:53.970 [2024-10-28 05:11:44.458698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.458724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.458859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.458885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.459057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.459082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.459219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.459245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.459384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.459429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.459581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.459610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.459776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.459803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.459942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.459968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.460085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.460111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.460249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.460275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 
00:35:53.970 [2024-10-28 05:11:44.460437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.460463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.460648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.460677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.460859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.460885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.461045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.461071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.461236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.461278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.461403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.461429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.461592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.461618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.461797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.461823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.970 [2024-10-28 05:11:44.461932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.970 [2024-10-28 05:11:44.461958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.970 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.462073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.462099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 
00:35:53.971 [2024-10-28 05:11:44.462264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.462305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.462487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.462513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.462625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.462684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.462810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.462836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.462979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.463004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.463167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.463193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.463380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.463405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.463542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.463568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.463734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.463760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.463869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.463895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 
00:35:53.971 [2024-10-28 05:11:44.464077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.464103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.464241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.464287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.464486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.464513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.464653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.464680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.464791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.464817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.464957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.464984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.465166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.465192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.465329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.465372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.465542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.465568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.465712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.465739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 
00:35:53.971 [2024-10-28 05:11:44.465880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.465908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.466076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.466103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.466237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.466263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.466417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.466451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.466627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.466665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.466860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.466886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.467041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.467071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.467200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.467229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.467389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.467416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.467555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.467581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 
00:35:53.971 [2024-10-28 05:11:44.467701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.467727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.467870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.467898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.468012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.468055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.468232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.468261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.468449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.468476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.468623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.468659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.468816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.468844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.468990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-10-28 05:11:44.469017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-10-28 05:11:44.469134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.469160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.469300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.469326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 
00:35:53.972 [2024-10-28 05:11:44.469443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.469470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.469653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.469683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.469822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.469849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.470017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.470043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.470207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.470263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.470438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.470467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.470626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.470665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.470806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.470833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.470998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.471028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.471187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.471214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 
00:35:53.972 [2024-10-28 05:11:44.471335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.471365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.471505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.471531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.471711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.471738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.471853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.471880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.472034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.472063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.472224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.472250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.472386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.472430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.472571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.472596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.472770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.472797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.472910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.472937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 
00:35:53.972 [2024-10-28 05:11:44.473048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.473075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.473212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.473239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.473370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.473402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.473556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.473586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.473763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.473790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.473955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.473980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.474120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.474146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.474287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.474313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.474462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.474491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.474602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.474630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 
00:35:53.972 [2024-10-28 05:11:44.474795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.474821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.474961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.475003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.475181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.475210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.475343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.475370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.475512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.475539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-10-28 05:11:44.475649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-10-28 05:11:44.475675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.475792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.475818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.475965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.476015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.476173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.476204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.476334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.476360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 
00:35:53.973 [2024-10-28 05:11:44.476492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.476519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.476679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.476723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.476884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.476911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.477046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.477072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.477212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.477239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.477385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.477411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.477522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.477565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.477719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.477748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.477906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.477932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.478074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.478100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 
00:35:53.973 [2024-10-28 05:11:44.478304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.478331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.478499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.478526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.478666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.478693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.478807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.478833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.479015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.479041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.479183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.479209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.479348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.479390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.479552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.479579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.479763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.479794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.479921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.479950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 
00:35:53.973 [2024-10-28 05:11:44.480134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.480160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.480315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.480345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.480493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.480535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.480680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-10-28 05:11:44.480707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-10-28 05:11:44.480851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.480878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.481019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.481047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.481185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.481211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.481349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.481375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.481514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.481540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.481705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.481731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 
00:35:53.974 [2024-10-28 05:11:44.481838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.481865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.481975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.482001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.482118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.482143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.482286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.482330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.482483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.482512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.482675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.482703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.482842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.482886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.483043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.483073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.483260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.483290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.483395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.483441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 
00:35:53.974 [2024-10-28 05:11:44.483614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.483650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.483787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.483814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.483929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.483956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.484123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.484152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.484276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.484302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.484463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.484489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.484601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.484627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.484785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.484812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.484947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.484989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.485112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.485141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 
00:35:53.974 [2024-10-28 05:11:44.485327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.485353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.485462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.485506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.485706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.485736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.485869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.485895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.486013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.486040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.486205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.486234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.486394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.486420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.486560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.486586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.486722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.486749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.486917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.486943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 
00:35:53.974 [2024-10-28 05:11:44.487084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.487111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.487252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.487278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.487446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.487472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.487613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.487667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.487821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.487851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.488011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.488042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.488165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.974 [2024-10-28 05:11:44.488206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.974 qpair failed and we were unable to recover it. 00:35:53.974 [2024-10-28 05:11:44.488368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.488397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.488591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.488619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.488769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.488796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 
00:35:53.975 [2024-10-28 05:11:44.488951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.488980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.489104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.489131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.489269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.489295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.489444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.489469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.489671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.489698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.489857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.489886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.490033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.490061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.490248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.490274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.490433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.490461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.490624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.490661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 
00:35:53.975 [2024-10-28 05:11:44.490845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.490872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.491028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.491057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.491235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.491264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.491420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.491446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.491621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.491657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.491817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.491845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.491983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.492010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.492109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.492135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.492264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.492293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.492454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.492481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 
00:35:53.975 [2024-10-28 05:11:44.492644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.492673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.492852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.492881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.493045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.493079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.493193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.493222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.493363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.493389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.493503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.493530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.493679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.493706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.493870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.493900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.494054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.494080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.494220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.494246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 
00:35:53.975 [2024-10-28 05:11:44.494360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.494386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.494524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.494550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.494739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.494769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.494893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.494922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.495051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.495077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.495215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.495241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.495363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.495392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.495502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.495527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.495673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.495700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.975 [2024-10-28 05:11:44.495815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.495841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 
00:35:53.975 [2024-10-28 05:11:44.495984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.975 [2024-10-28 05:11:44.496010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.975 qpair failed and we were unable to recover it. 00:35:53.976 [2024-10-28 05:11:44.496156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.976 [2024-10-28 05:11:44.496185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.976 qpair failed and we were unable to recover it. 00:35:53.976 [2024-10-28 05:11:44.496343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.976 [2024-10-28 05:11:44.496372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.976 qpair failed and we were unable to recover it. 00:35:53.976 [2024-10-28 05:11:44.496536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.976 [2024-10-28 05:11:44.496563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.976 qpair failed and we were unable to recover it. 00:35:53.976 [2024-10-28 05:11:44.496675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.976 [2024-10-28 05:11:44.496702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.976 qpair failed and we were unable to recover it. 00:35:53.976 [2024-10-28 05:11:44.496817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.976 [2024-10-28 05:11:44.496844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:53.976 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.496978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.497005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.497139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.497165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.497295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.497321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.497438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.497464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 
00:35:54.261 [2024-10-28 05:11:44.497595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.497621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.497750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.497780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.497945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.497972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.498085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.498112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.498248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.498274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.498413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.498439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.498588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.498613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.498745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.498772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.498908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.498936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.499049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.499076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 
00:35:54.261 [2024-10-28 05:11:44.499216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.499243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.499385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.499411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.499523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.499549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.499685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.261 [2024-10-28 05:11:44.499727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.261 qpair failed and we were unable to recover it. 00:35:54.261 [2024-10-28 05:11:44.499888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.499916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.500058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.500084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.500255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.500281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.500391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.500418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.500531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.500558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.500674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.500702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 
00:35:54.262 [2024-10-28 05:11:44.500819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.500846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.500956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.500983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.501120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.501146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.501343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.501369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.501474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.501517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.501641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.501671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.501857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.501885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.502005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.502031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.502156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.502184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.502325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.502350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 
00:35:54.262 [2024-10-28 05:11:44.502494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.502520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.502686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.502713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.502823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.502850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.503015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.503042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.503155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.503182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.503318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.503343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.503493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.503522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.503700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.503731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.503893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.503920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.504029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.504055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 
00:35:54.262 [2024-10-28 05:11:44.504224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.504256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.504444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.504471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.504629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.504678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.504804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.504832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.505018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.505045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.505233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.505262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.505405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.505432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.505568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.505594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.505763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.505794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.505973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.505999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 
00:35:54.262 [2024-10-28 05:11:44.506139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.506168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.506351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.506380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.506535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.506565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.262 [2024-10-28 05:11:44.506734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.262 [2024-10-28 05:11:44.506762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.262 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.506906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.506933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.507104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.507133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.507274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.507301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.507472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.507517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.507682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.507713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.507854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.507883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 
00:35:54.263 [2024-10-28 05:11:44.508053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.508079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.508214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.508244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.508435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.508462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.508561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.508605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.508792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.508823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.508981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.509008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.509148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.509174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.509314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.509341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.509516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.509542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.509726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.509755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 
00:35:54.263 [2024-10-28 05:11:44.509912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.509941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.510097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.510124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.510266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.510308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.510459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.510489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.510657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.510684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.510841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.510871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.511021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.511051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.511244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.511270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.511381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.511408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.511548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.511574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 
00:35:54.263 [2024-10-28 05:11:44.511691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.511718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.511862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.511905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.512082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.512112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.512275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.512302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.512438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.512482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.512628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.512664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.512824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.512850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.513034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.513063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.513243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.513272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.513436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.513462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 
00:35:54.263 [2024-10-28 05:11:44.513604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.513630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.513775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.513805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.513926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.513952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.263 [2024-10-28 05:11:44.514112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.263 [2024-10-28 05:11:44.514157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.263 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.514315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.514360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.514501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.514531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.514678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.514705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.514848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.514874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.515015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.515043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.515206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.515232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 
00:35:54.264 [2024-10-28 05:11:44.515366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.515396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.515548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.515575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.515690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.515718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.515853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.515879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.516019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.516045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.516153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.516196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.516373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.516402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.516594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.516620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.516791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.516821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.516976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.517008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 
00:35:54.264 [2024-10-28 05:11:44.517127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.517154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.517294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.517319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.517475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.517504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.517692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.517719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.517836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.517862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.517994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.518024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.518177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.518204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.518311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.518336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.518476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.518504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.518661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.518688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 
00:35:54.264 [2024-10-28 05:11:44.518794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.518821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.518988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.519017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.519182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.519208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.519364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.519394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.519581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.519607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.519724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.519751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.519916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.519943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.520078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.520104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.520282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.520308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.520443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.520469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 
00:35:54.264 [2024-10-28 05:11:44.520601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.520630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.520831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.520857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.521013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.521042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.264 qpair failed and we were unable to recover it. 00:35:54.264 [2024-10-28 05:11:44.521191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.264 [2024-10-28 05:11:44.521220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.521375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.521401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.521595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.521624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.521786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.521815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.521975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.522001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.522166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.522194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.522361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.522392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 
00:35:54.265 [2024-10-28 05:11:44.522578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.522604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.522725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.522752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.522931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.522961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.523090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.523115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.523251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.523277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.523434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.523463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.523661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.523689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.523849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.523878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.523991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.524020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.524187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.524214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 
00:35:54.265 [2024-10-28 05:11:44.524349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.524394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.524603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.524650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.524769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.524797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.524974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.525000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.525132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.525161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.525294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.525322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.525466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.525491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.525632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.525665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.525810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.525837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.525998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.526027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 
00:35:54.265 [2024-10-28 05:11:44.526145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.526178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.526312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.526339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.526488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.526514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.526626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.526659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.526774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.526799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.526910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.526939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.527070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.527097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.265 [2024-10-28 05:11:44.527245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.265 [2024-10-28 05:11:44.527272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.265 qpair failed and we were unable to recover it. 00:35:54.266 [2024-10-28 05:11:44.527444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.266 [2024-10-28 05:11:44.527474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.266 qpair failed and we were unable to recover it. 00:35:54.266 [2024-10-28 05:11:44.527648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.266 [2024-10-28 05:11:44.527692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.266 qpair failed and we were unable to recover it. 
00:35:54.266 [2024-10-28 05:11:44.527838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.266 [2024-10-28 05:11:44.527866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.266 qpair failed and we were unable to recover it. 00:35:54.266 [2024-10-28 05:11:44.528008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.266 [2024-10-28 05:11:44.528036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.266 qpair failed and we were unable to recover it. 00:35:54.266 [2024-10-28 05:11:44.528179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.266 [2024-10-28 05:11:44.528205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.266 qpair failed and we were unable to recover it. 00:35:54.266 [2024-10-28 05:11:44.528309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.266 [2024-10-28 05:11:44.528336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.266 qpair failed and we were unable to recover it. 00:35:54.266 [2024-10-28 05:11:44.528477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.266 [2024-10-28 05:11:44.528502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.266 qpair failed and we were unable to recover it. 00:35:54.266 [2024-10-28 05:11:44.528667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.266 [2024-10-28 05:11:44.528698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.266 qpair failed and we were unable to recover it. 00:35:54.266 [2024-10-28 05:11:44.528854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.266 [2024-10-28 05:11:44.528880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.266 qpair failed and we were unable to recover it. 00:35:54.266 [2024-10-28 05:11:44.528989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.266 [2024-10-28 05:11:44.529015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.266 qpair failed and we were unable to recover it. 00:35:54.266 [2024-10-28 05:11:44.529170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.266 [2024-10-28 05:11:44.529198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.266 qpair failed and we were unable to recover it. 00:35:54.266 [2024-10-28 05:11:44.529328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.266 [2024-10-28 05:11:44.529354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.266 qpair failed and we were unable to recover it. 
00:35:54.266 [2024-10-28 05:11:44.529524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:54.266 [2024-10-28 05:11:44.529568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 
00:35:54.266 qpair failed and we were unable to recover it. 
00:35:54.266 [2024-10-28 05:11:44.529731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:54.266 [2024-10-28 05:11:44.529763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 
00:35:54.266 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ac3390 or 0x7f9f04000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously with only the timestamps advancing, until the final occurrence below ...]
00:35:54.271 [2024-10-28 05:11:44.566672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:54.271 [2024-10-28 05:11:44.566701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 
00:35:54.271 qpair failed and we were unable to recover it. 
00:35:54.271 [2024-10-28 05:11:44.566809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.566850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 00:35:54.271 [2024-10-28 05:11:44.567013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.567045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 00:35:54.271 [2024-10-28 05:11:44.567235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.567261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 00:35:54.271 [2024-10-28 05:11:44.567411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.567440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 00:35:54.271 [2024-10-28 05:11:44.567593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.567621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 00:35:54.271 [2024-10-28 05:11:44.567818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.567844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 00:35:54.271 [2024-10-28 05:11:44.567947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.567994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 00:35:54.271 [2024-10-28 05:11:44.568129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.568158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 00:35:54.271 [2024-10-28 05:11:44.568292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.568318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 00:35:54.271 [2024-10-28 05:11:44.568455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.568481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 
00:35:54.271 [2024-10-28 05:11:44.568670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.568699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 00:35:54.271 [2024-10-28 05:11:44.568834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.568861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 00:35:54.271 [2024-10-28 05:11:44.569005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.569031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.271 qpair failed and we were unable to recover it. 00:35:54.271 [2024-10-28 05:11:44.569203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.271 [2024-10-28 05:11:44.569233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.569378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.569409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.569549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.569575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.569815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.569855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.570007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.570036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.570175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.570218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.570367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.570396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 
00:35:54.272 [2024-10-28 05:11:44.570556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.570582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.570727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.570754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.570867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.570893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.571034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.571060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.571206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.571232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.571397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.571424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.571567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.571593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.571702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.571728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.571876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.571902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.572051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.572077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 
00:35:54.272 [2024-10-28 05:11:44.572223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.572264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.572399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.572427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.572592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.572618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.572764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.572791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.572904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.572932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.573046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.573072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.573211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.573237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.573373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.573420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.573583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.573611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.573757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.573783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 
00:35:54.272 [2024-10-28 05:11:44.573931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.573958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.574072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.574103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.574238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.574265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.574463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.574488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.574629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.574675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.574813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.574840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.574991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.575018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.575135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.575162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.575326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.575351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.575488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.575517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 
00:35:54.272 [2024-10-28 05:11:44.575679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.575706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.575873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.575899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.576061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.576091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.272 qpair failed and we were unable to recover it. 00:35:54.272 [2024-10-28 05:11:44.576226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.272 [2024-10-28 05:11:44.576253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.576422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.576448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.576620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.576660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.576814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.576841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.576992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.577018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.577185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.577212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.577368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.577396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 
00:35:54.273 [2024-10-28 05:11:44.577534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.577560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.577723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.577768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.577937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.577965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.578129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.578155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.578298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.578327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.578485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.578511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.578651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.578677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.578817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.578850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.579014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.579046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.579234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.579263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 
00:35:54.273 [2024-10-28 05:11:44.579426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.579453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.579618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.579651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.579778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.579809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.579988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.580017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.580178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.580204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.580319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.580345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.580487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.580516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.580687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.580713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.580862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.580892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.581064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.581093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 
00:35:54.273 [2024-10-28 05:11:44.581225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.581251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.581414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.581457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.581643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.581673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.581838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.581868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.582013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.582040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.582177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.582204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.582333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.582359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.582520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.582562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.582722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.582752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.582911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.582945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 
00:35:54.273 [2024-10-28 05:11:44.583095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.583121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.583260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.583287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.583456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.583483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.273 [2024-10-28 05:11:44.583640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.273 [2024-10-28 05:11:44.583670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.273 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.583824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.583853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.584017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.584051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.584195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.584222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.584364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.584406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.584562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.584590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.584745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.584791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 
00:35:54.274 [2024-10-28 05:11:44.584914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.584942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.585099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.585126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.585308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.585338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.585542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.585586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.585768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.585797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.585966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.585992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.586169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.586195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.586347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.586372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.586538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.586565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.586697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.586724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 
00:35:54.274 [2024-10-28 05:11:44.586869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.586895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.587058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.587088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.587217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.587246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.587418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.587444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.587583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.587611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.587795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.587839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.588008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.588037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.588178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.588205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.588366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.588396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.588550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.588577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 
00:35:54.274 [2024-10-28 05:11:44.588742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.588768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.588912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.588941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.589093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.589126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.589268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.589294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.589462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.589490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.589632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.589664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.589774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.589801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.589937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.589965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.590149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.274 [2024-10-28 05:11:44.590175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.274 qpair failed and we were unable to recover it. 00:35:54.274 [2024-10-28 05:11:44.590317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.590342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 
00:35:54.275 [2024-10-28 05:11:44.590490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.590518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.590659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.590701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.590850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.590882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.591005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.591035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.591189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.591215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.591353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.591397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.591515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.591545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.591688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.591715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.591865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.591891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.592065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.592096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 
00:35:54.275 [2024-10-28 05:11:44.592256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.592283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.592423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.592467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.592647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.592684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.592838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.592864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.592983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.593010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.593146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.593173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.593291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.593320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.593487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.593529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.593670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.593699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.593855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.593885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 
00:35:54.275 [2024-10-28 05:11:44.593991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.594019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.594149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.594175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.594313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.594340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.594522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.594553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.594696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.594726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.594888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.594914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.595056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.595081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.595224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.595250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.595394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.595420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.595573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.595603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 
00:35:54.275 [2024-10-28 05:11:44.595778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.595807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.595940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.595967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.596110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.596155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.596320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.596352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.596479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.596505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.596700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.596729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.596910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.596939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.597124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.597152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.597255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.275 [2024-10-28 05:11:44.597298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.275 qpair failed and we were unable to recover it. 00:35:54.275 [2024-10-28 05:11:44.597461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.597488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 
00:35:54.276 [2024-10-28 05:11:44.597624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.597656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.597806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.597832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.597984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.598010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.598145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.598172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.598357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.598387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.598510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.598540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.598696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.598727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.598869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.598914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.599067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.599097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.599264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.599291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 
00:35:54.276 [2024-10-28 05:11:44.599477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.599507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.600133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.600166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.600360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.600387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.600498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.600524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.600655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.600683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.600801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.600827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.600945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.600972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.601140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.601172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.601333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.601362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.601518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.601548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 
00:35:54.276 [2024-10-28 05:11:44.601730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.601775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.601978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.602007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.602123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.602152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.602294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.602319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.602450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.602477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.602602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.602629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.602748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.602775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.602914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.602940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.603077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.603104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.603260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.603289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 
00:35:54.276 [2024-10-28 05:11:44.603442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.603471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.603652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.603682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.603831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.603859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.604022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.604052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.604161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.604189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.604324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.604352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.604515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.604541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.604686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.604713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.604855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.604884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-10-28 05:11:44.605001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-10-28 05:11:44.605026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.277 [2024-10-28 05:11:44.605163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.605190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.605321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.605350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.605502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.605528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.605668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.605706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.605864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.605894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.606054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.606081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.606222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.606248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.606430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.606460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.606627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.606663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.606821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.606847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.277 [2024-10-28 05:11:44.607013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.607042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.607200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.607226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.607371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.607397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.607504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.607530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.607660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.607694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.607811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.607837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.607990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.608018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.608197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.608223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.608361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.608405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.608562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.608591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.277 [2024-10-28 05:11:44.608738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.608769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.608912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.608939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.609138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.609170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.609337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.609363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.609499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.609543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.613652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.613701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.613878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.613908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.614043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.614071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.614238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.614269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.614461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.614489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.277 [2024-10-28 05:11:44.614696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.614727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.614878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.614917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.615077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.615104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.615239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.615265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.615443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.615486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.615684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.615711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.615846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.615874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.616036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.616065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.616201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.616227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-10-28 05:11:44.616363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-10-28 05:11:44.616389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.278 [2024-10-28 05:11:44.616547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.616576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.616744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.616771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.616903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.616948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.617115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.617140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.617280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.617307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.617452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.617494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.617608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.617644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.617783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.617809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.617959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.617986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.618126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.618155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 
00:35:54.278 [2024-10-28 05:11:44.618344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.618370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.618523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.618552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.618673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.618703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.618869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.618895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.619079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.619108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.619296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.619322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.619457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.619483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.619674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.619704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.619847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.619877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.620075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.620101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 
00:35:54.278 [2024-10-28 05:11:44.620215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.620240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.620376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.620405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.620545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.620572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.620733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.620760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.620904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.620933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.621064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.621090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.621224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.621249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.621389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.621431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.621567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.621593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.621768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.621795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 
00:35:54.278 [2024-10-28 05:11:44.621898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.621924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.622067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.622095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.622228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.622254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.622387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.622416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.622543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.622569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.622739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.622766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.622987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.623014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-10-28 05:11:44.623129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-10-28 05:11:44.623156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 00:35:54.279 [2024-10-28 05:11:44.623293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.279 [2024-10-28 05:11:44.623319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 00:35:54.279 [2024-10-28 05:11:44.623476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.279 [2024-10-28 05:11:44.623505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 
00:35:54.279 [2024-10-28 05:11:44.623657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.279 [2024-10-28 05:11:44.623683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 00:35:54.279 [2024-10-28 05:11:44.623866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.279 [2024-10-28 05:11:44.623895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 00:35:54.279 [2024-10-28 05:11:44.624050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.279 [2024-10-28 05:11:44.624076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 00:35:54.279 [2024-10-28 05:11:44.624221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.279 [2024-10-28 05:11:44.624247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 00:35:54.279 [2024-10-28 05:11:44.624397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.279 [2024-10-28 05:11:44.624423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 00:35:54.279 [2024-10-28 05:11:44.624556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.279 [2024-10-28 05:11:44.624586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 00:35:54.279 [2024-10-28 05:11:44.624721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.279 [2024-10-28 05:11:44.624748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 00:35:54.279 [2024-10-28 05:11:44.624904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.279 [2024-10-28 05:11:44.624933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 00:35:54.279 [2024-10-28 05:11:44.625107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.279 [2024-10-28 05:11:44.625140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 00:35:54.279 [2024-10-28 05:11:44.625299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.279 [2024-10-28 05:11:44.625325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.279 qpair failed and we were unable to recover it. 
00:35:54.279 [2024-10-28 05:11:44.625479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.279 [2024-10-28 05:11:44.625508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.279 qpair failed and we were unable to recover it.
[... the same three-message error sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") recurs continuously for every attempt timestamped from 2024-10-28 05:11:44.625643 through 05:11:44.663236 ...]
00:35:54.284 [2024-10-28 05:11:44.663376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.284 [2024-10-28 05:11:44.663402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.284 qpair failed and we were unable to recover it.
00:35:54.284 [2024-10-28 05:11:44.663565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-10-28 05:11:44.663595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-10-28 05:11:44.663752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-10-28 05:11:44.663781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-10-28 05:11:44.663948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-10-28 05:11:44.663975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-10-28 05:11:44.664142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-10-28 05:11:44.664169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-10-28 05:11:44.664311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-10-28 05:11:44.664339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-10-28 05:11:44.664502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-10-28 05:11:44.664532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-10-28 05:11:44.664683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-10-28 05:11:44.664711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-10-28 05:11:44.664855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-10-28 05:11:44.664881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-10-28 05:11:44.664995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-10-28 05:11:44.665022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-10-28 05:11:44.665154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-10-28 05:11:44.665181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 
00:35:54.284 [2024-10-28 05:11:44.665292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-10-28 05:11:44.665318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-10-28 05:11:44.665450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-10-28 05:11:44.665476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-10-28 05:11:44.665615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.665663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.665820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.665850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.666017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.666044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.666201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.666230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.666409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.666438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.666596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.666623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.666760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.666803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.666966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.666995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 
00:35:54.285 [2024-10-28 05:11:44.667133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.667159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.667334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.667360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.667535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.667561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.667699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.667727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.667910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.667939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.668111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.668139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.668281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.668308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.668448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.668473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.668582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.668609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.668811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.668838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 
00:35:54.285 [2024-10-28 05:11:44.668972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.669002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.669181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.669210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.669342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.669372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.669513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.669556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.669744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.669774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.669932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.669958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.670115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.670147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.670323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.670352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.670491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.670537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.670698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.670725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 
00:35:54.285 [2024-10-28 05:11:44.670835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.670861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.670999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.671027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.671183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.671213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.671370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.671395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.671530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.671557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.671698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.671740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.671899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.671930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.672092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.672119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.672258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.672302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.672449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.672478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 
00:35:54.285 [2024-10-28 05:11:44.672645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.672672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.672829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.672862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.285 qpair failed and we were unable to recover it. 00:35:54.285 [2024-10-28 05:11:44.672984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.285 [2024-10-28 05:11:44.673014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.673173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.673199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.673362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.673405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.673548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.673577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.673737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.673764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.673947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.673976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.674151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.674180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.674341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.674367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 
00:35:54.286 [2024-10-28 05:11:44.674554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.674582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.674736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.674765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.674904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.674932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.675070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.675098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.675266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.675298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.675463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.675490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.675627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.675669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.675832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.675858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.676020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.676047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.676186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.676214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 
00:35:54.286 [2024-10-28 05:11:44.676405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.676435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.676619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.676658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.676817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.676844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.676985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.677027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.677183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.677209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.677345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.677390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.677539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.677567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.677729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.677755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.677895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.677938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.678094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.678123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 
00:35:54.286 [2024-10-28 05:11:44.678307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.678333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.678514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.678543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.678732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.678758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.678898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.678925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.679080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.679109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.679265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.679295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.679434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.679462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.679676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.679707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.679825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.679854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.680040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.680066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 
00:35:54.286 [2024-10-28 05:11:44.680200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.680226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.680407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.680434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.286 [2024-10-28 05:11:44.680577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.286 [2024-10-28 05:11:44.680604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.286 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.680741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.680768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.680910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.680937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.681091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.681117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.681252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.681279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.681415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.681442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.681589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.681660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.681789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.681817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 
00:35:54.287 [2024-10-28 05:11:44.681972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.681999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.682150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.682195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.682362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.682407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.682528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.682556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.682701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.682729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.682868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.682894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.683057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.683086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.683237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.683266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.683447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.683477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.683632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.683684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 
00:35:54.287 [2024-10-28 05:11:44.683852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.683878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.684032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.684060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.684215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.684246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.684409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.684436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.684606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.684643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.684829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.684856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.685008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.685037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.685175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.685204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.685346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.685375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.685527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.685556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 
00:35:54.287 [2024-10-28 05:11:44.685748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.685775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.685935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.685964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.686139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.686168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.686308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.686340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.686461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.686490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.686661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.686702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.686830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.686858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.687021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-10-28 05:11:44.687072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-10-28 05:11:44.687208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-10-28 05:11:44.687253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-10-28 05:11:44.687382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-10-28 05:11:44.687426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 
00:35:54.288 [2024-10-28 05:11:44.687565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-10-28 05:11:44.687591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-10-28 05:11:44.687739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-10-28 05:11:44.687766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-10-28 05:11:44.687910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-10-28 05:11:44.687936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-10-28 05:11:44.688095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-10-28 05:11:44.688140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-10-28 05:11:44.688248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-10-28 05:11:44.688274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-10-28 05:11:44.688440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-10-28 05:11:44.688466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-10-28 05:11:44.688612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-10-28 05:11:44.688645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-10-28 05:11:44.688785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-10-28 05:11:44.688813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-10-28 05:11:44.688986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-10-28 05:11:44.689013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-10-28 05:11:44.689203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-10-28 05:11:44.689247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 
00:35:54.288 - 00:35:54.293 [2024-10-28 05:11:44.689404 - 05:11:44.725783] the same failure repeats without interruption: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 (and intermittently tqpair=0x1ac3390) with addr=10.0.0.2, port=4420, and every attempt ends with: qpair failed and we were unable to recover it.
00:35:54.293 [2024-10-28 05:11:44.725946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.725991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.726129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.726177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.726312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.726338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.726480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.726506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.726707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.726753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.726880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.726914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.727073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.727107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.727343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.727372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.727488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.727518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.727644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.727686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 
00:35:54.293 [2024-10-28 05:11:44.727838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.727867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.728018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.728048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.728188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.728217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.728394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.728424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.728572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.728601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.728756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.728784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.728934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.728964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.729083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.729115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.729263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.729292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.729503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.729531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 
00:35:54.293 [2024-10-28 05:11:44.729655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.729683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.729857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.729884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.730017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-10-28 05:11:44.730067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-10-28 05:11:44.730231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.730275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.730443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.730470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.730580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.730607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.730759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.730787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.730897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.730941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.731100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.731134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.731288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.731317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 
00:35:54.294 [2024-10-28 05:11:44.731475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.731504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.731659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.731685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.731856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.731882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.732059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.732089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.732334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.732363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.732516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.732545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.732709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.732736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.732879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.732906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.733039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.733069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.733257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.733286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 
00:35:54.294 [2024-10-28 05:11:44.733426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.733455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.733647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.733691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.733866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.733893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.734073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.734130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.734269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.734319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.734489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.734517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.734623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.734656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.734822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.734849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.734984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.735011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.735215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.735242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 
00:35:54.294 [2024-10-28 05:11:44.735409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.735435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.735601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.735628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.735796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.735840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.735967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.736013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.736182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.736209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.736371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.736403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.736542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.736569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.736703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.736734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.736918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.736966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.737149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.737195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 
00:35:54.294 [2024-10-28 05:11:44.737340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.737367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.737508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.737536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.294 [2024-10-28 05:11:44.737677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.294 [2024-10-28 05:11:44.737704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.294 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.737816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.737842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.737987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.738013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.738177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.738204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.738343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.738369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.738515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.738542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.738703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.738749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.738884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.738929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 
00:35:54.295 [2024-10-28 05:11:44.739079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.739106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.739268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.739294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.739434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.739462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.739599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.739626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.739806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.739832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.739967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.740014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.740125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.740153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.740294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.740320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.740482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.740508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.740625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.740661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 
00:35:54.295 [2024-10-28 05:11:44.740774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.740801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.740966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.740996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.741149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.741178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.741332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.741359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.741501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.741528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.741641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.741669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.741784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.741810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.741979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.742005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.742120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.742145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.742282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.742309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 
00:35:54.295 [2024-10-28 05:11:44.742420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.742446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.742613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.742648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.742790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.742818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.742989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.743016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.743135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.743160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.743295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.743325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.743458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.743485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.743622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.743655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.743764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.743790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.743908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.743935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 
00:35:54.295 [2024-10-28 05:11:44.744107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.744133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.744241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.744269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.295 [2024-10-28 05:11:44.744413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.295 [2024-10-28 05:11:44.744440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.295 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.744572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.744599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.744744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.744771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.744919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.744945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.745106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.745132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.745276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.745302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.745465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.745491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.745602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.745630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 
00:35:54.296 [2024-10-28 05:11:44.745791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.745834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.745962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.745993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.746175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.746220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.746358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.746387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.746550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.746576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.746707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.746752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.746923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.746951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.747109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.747154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.747319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.747346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.747497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.747523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 
00:35:54.296 [2024-10-28 05:11:44.747692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.747722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.747899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.747946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.748095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.748138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.748305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.748331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.748470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.748496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.748644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.748671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.748865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.748894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.749066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.749110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.749247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.749272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.749383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.749408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 
00:35:54.296 [2024-10-28 05:11:44.749554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.749581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.749718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.749762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.749914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.749956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.750093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.750136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.750277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.750304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.750416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.750448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.750589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-10-28 05:11:44.750616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-10-28 05:11:44.750778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.750822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.750998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.751046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.751212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.751239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 
00:35:54.297 [2024-10-28 05:11:44.751401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.751427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.751564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.751591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.751786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.751831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.751962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.752005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.752142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.752185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.752350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.752377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.752541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.752568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.752727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.752771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.752926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.752970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.753125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.753168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 
00:35:54.297 [2024-10-28 05:11:44.753340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.753366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.753530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.753556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.753746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.753791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.753984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.754027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.754189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.754234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.754370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.754396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.754537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.754564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.754689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.754745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.754901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.754946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.755101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.755130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 
00:35:54.297 [2024-10-28 05:11:44.755314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.755341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.755487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.755513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.755632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.755664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.755830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.755875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.756033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.756077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.756220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.756248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.756413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.756439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.756571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.756597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.756741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.756772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.756925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.756972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 
00:35:54.297 [2024-10-28 05:11:44.757130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.757175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.757312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.757338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.757503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.757530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.757687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.757717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.757890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.757934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.758121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.758172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.758337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-10-28 05:11:44.758363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-10-28 05:11:44.758482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.758508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.758648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.758675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.758836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.758881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 
00:35:54.298 [2024-10-28 05:11:44.759042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.759087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.759225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.759251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.759380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.759407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.759524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.759550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.759708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.759754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.759945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.759974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.760168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.760195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.760361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.760388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.760552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.760578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.760749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.760794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 
00:35:54.298 [2024-10-28 05:11:44.760966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.760994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.761146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.761175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.761326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.761352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.761466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.761493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.761631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.761670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.761868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.761916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.762077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.762120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.762294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.762320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.762464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.762490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.762607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.762640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 
00:35:54.298 [2024-10-28 05:11:44.762799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.762843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.762980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.763024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.763246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.763292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.763454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.763486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.763651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.763681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.763802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.763832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.763983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.764013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.764173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.764205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.764382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.764429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.764570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.764596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 
00:35:54.298 [2024-10-28 05:11:44.764793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.764837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.764972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.765017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.765155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.765182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.765300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.765327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.765459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.765487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.765607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.765639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-10-28 05:11:44.765785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-10-28 05:11:44.765813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.765954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.765981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.766117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.766144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.766260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.766287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 
00:35:54.299 [2024-10-28 05:11:44.766430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.766458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.766592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.766619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.766796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.766826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.766981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.767014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.767174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.767203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.767334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.767363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.767533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.767560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.767704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.767733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.767874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.767918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.768088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.768118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 
00:35:54.299 [2024-10-28 05:11:44.768298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.768342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.768448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.768475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.768589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.768615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.768818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.768861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.769049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.769094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.769221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.769250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.769415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.769442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.769571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.769598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.769740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.769772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.769899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.769928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 
00:35:54.299 [2024-10-28 05:11:44.770085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.770115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.770246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.770274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.770454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.770489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.770654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.770698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.770835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.770865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.771045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.771074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.771218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.771246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.771397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.771428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.771595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.771621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.771816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.771860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 
00:35:54.299 [2024-10-28 05:11:44.772012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.772041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.772218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.772261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.772420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.772465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.772570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.772597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.772744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.772789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.772897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.772925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.773096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-10-28 05:11:44.773146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-10-28 05:11:44.773336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.773379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.773503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.773529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.773718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.773764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 
00:35:54.300 [2024-10-28 05:11:44.773904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.773947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.774121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.774165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.774313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.774338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.774502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.774529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.774646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.774673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.774811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.774855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.775011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.775056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.775210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.775236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.775380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.775406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.775552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.775579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 
00:35:54.300 [2024-10-28 05:11:44.775722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.775769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.775989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.776033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.776171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.776202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.776373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.776400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.776535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.776562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.776695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.776722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.776837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.776866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.777022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.777055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.777212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.777241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.777393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.777422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 
00:35:54.300 [2024-10-28 05:11:44.777592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.777618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.777767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.777792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.777904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.777931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.778100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.778148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.778338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.778382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.778534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.778561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.778733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.778760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.778949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.778976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.779109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.779153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.779316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.779358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 
00:35:54.300 [2024-10-28 05:11:44.779497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.779523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.779631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.779670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-10-28 05:11:44.779806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-10-28 05:11:44.779832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.779977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.780003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.780165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.780192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.780326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.780353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.780505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.780532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.780646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.780674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.780814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.780841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.780974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.781018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 
00:35:54.301 [2024-10-28 05:11:44.781158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.781184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.781293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.781319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.781464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.781490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.781639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.781666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.781820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.781866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.782028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.782072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.782262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.782307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.782454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.782481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.782651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.782678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.782866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.782920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 
00:35:54.301 [2024-10-28 05:11:44.783084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.783129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.783279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.783331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.783470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.783496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.783607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.783641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.783832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.783877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.784018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.784061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.784241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.784268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.784430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.784456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.784594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.784626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.784774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.784801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 
00:35:54.301 [2024-10-28 05:11:44.784913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.784940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.785088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.785115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.785252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.785278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.785403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.785429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.785570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.785598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.785761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.785787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.785897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.785924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.786094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.786121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.786258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.786284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.786398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.786425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 
00:35:54.301 [2024-10-28 05:11:44.786543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.786570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.786720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.786766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-10-28 05:11:44.786879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-10-28 05:11:44.786905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.787055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.787083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.787194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.787221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.787368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.787394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.787544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.787570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.787713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.787754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.787914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.787956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.788135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.788164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 
00:35:54.302 [2024-10-28 05:11:44.788305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.788332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.788441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.788468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.788582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.788608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.788767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.788794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.788929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.788954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.789080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.789106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.789225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.789252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.789404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.789430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.789540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.789565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.789741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.789773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 
00:35:54.302 [2024-10-28 05:11:44.789945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.789973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.790118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.790147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.790276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.790306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.790457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.790486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.790662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.790709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.790847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.790874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.791015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.791045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.791196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.791225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.791380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.791410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.791542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.791569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 
00:35:54.302 [2024-10-28 05:11:44.791702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.791730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.791854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.791882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.792038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.792080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.792243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.792272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.792428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.792457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.792644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.792670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.792813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.792839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.792979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.793020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.793194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.793222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.793380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.793411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 
00:35:54.302 [2024-10-28 05:11:44.793552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.793578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.793705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.793731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.302 [2024-10-28 05:11:44.793836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.302 [2024-10-28 05:11:44.793863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.302 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.794017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.794043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.794192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.794227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.794377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.794406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.794579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.794606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.794724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.794752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.794858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.794883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.795086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.795115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 
00:35:54.303 [2024-10-28 05:11:44.795271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.795300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.795448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.795481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.795618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.795664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.795830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.795856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.796039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.796067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.796220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.796249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.796425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.796454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.796602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.796630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.796799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.796825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.796975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.797002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 
00:35:54.303 [2024-10-28 05:11:44.797148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.797180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.797366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.797396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.797550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.797576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.797695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.797722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.797842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.797868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.797985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.798011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.798131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.798158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.798331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.798360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.798546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.798575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.798742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.798768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 
00:35:54.303 [2024-10-28 05:11:44.798912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.798938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.799144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.799173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.799320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.799350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.799513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.799540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.799691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.799717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.799855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.799881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.800047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.800078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.800230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.800259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.800406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.800434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.800606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.800632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 
00:35:54.303 [2024-10-28 05:11:44.800783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.800809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.800976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.801002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.303 [2024-10-28 05:11:44.801220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.303 [2024-10-28 05:11:44.801249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.303 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.801442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.801471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.801619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.801654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.801778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.801804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.801981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.802007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.802141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.802170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.802347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.802377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.802534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.802561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-10-28 05:11:44.802690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.802717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.802831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.802857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.802979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.803008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.803144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.803188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.803318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.803347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.803498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.803527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.803693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.803720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.803836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.803861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.804015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.804045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.804199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.804228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-10-28 05:11:44.804374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.804407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.804565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.804594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.805609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.805652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.805823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.805850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.806014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.806042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.806179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.806208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.806390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.806419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.806581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.806607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.806755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.806782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.806944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.806972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-10-28 05:11:44.807146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.807175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.807320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.807349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.807514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.807543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.807705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.807732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.807854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.807880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.808024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.808049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.808184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.808213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.808392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.808430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.808586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.808614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.808777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.808802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-10-28 05:11:44.808912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.808938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.809116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.809159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.809320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.809350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.809502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.809528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.809670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.809696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-10-28 05:11:44.809828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-10-28 05:11:44.809854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.810001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.810027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.810186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.810219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.810399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.810428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.810595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.810621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 
00:35:54.305 [2024-10-28 05:11:44.810762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.810788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.810906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.810948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.811085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.811114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.811264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.811293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.811454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.811482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.811614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.811649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.811781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.811808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.811911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.811937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.812116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.812146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.812296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.812336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 
00:35:54.305 [2024-10-28 05:11:44.812463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.812492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.812648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.812691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.812845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.812870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.813020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.813049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.813229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.813257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.813418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.813446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.813593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.813622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.813786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.813812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.813922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.813956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-10-28 05:11:44.814097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.814130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 
00:35:54.305 [2024-10-28 05:11:44.814282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-10-28 05:11:44.814309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.814458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.814487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.814604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.814652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.814778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.814804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.814969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.815002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.815158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.815187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.815347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.815376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.815528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.815557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.815721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.815748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.815862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.815888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 
00:35:54.306 [2024-10-28 05:11:44.816045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.816073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.816224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.816253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.816377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.816406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.816531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.816569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.816746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.816773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.816944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.816970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.817132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.817161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.817312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.817349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.817513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.817542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.817721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.817761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 
00:35:54.306 [2024-10-28 05:11:44.817888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.817917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.818103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.818147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.818303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.818347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.818514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.818541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.818666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.818694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.818826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.818856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.818994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.819020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.819168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.819194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.819356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.819385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.819564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.819593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 
00:35:54.306 [2024-10-28 05:11:44.819733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.819759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.819947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.819980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.820156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.820185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.820309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.820337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.820496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.820521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.820659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.820685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.820844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.820871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.821017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.821044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.821194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.821222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.821364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.821391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 
00:35:54.306 [2024-10-28 05:11:44.821560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.821585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.821711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.821735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.821865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.306 [2024-10-28 05:11:44.821890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.306 qpair failed and we were unable to recover it. 00:35:54.306 [2024-10-28 05:11:44.822032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.822057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.822247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.822274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.822451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.822479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.822671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.822697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.822814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.822838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.823002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.823030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.823175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.823204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 
00:35:54.307 [2024-10-28 05:11:44.823421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.823453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.823608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.823642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.823814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.823841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.823994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.824036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.824243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.824288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.824442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.824488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.824659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.824687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.824836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.824863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.825034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.825107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.825263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.825314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 
00:35:54.307 [2024-10-28 05:11:44.825428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.825455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.825584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.825611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.825807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.825852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.826027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.826081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.826257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.826287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.826464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.826491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.826605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.826631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.826783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.826811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.826918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.826944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.827051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.827077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 
00:35:54.307 [2024-10-28 05:11:44.827257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.827325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.827449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.827476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.827627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.827661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.827777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.827804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.827977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.828013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.828188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.828215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.828672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.828703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.828851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.828880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.829003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.829032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 00:35:54.307 [2024-10-28 05:11:44.829152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.307 [2024-10-28 05:11:44.829187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.307 qpair failed and we were unable to recover it. 
00:35:54.591 [2024-10-28 05:11:44.829649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.829691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.829899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.829977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.830199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.830242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.830447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.830479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.830616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.830677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.830831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.830866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.831009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.831037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.831169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.831198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.831353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.831382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.831526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.831569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 
00:35:54.591 [2024-10-28 05:11:44.831759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.831788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.831980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.832010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.832145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.832175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.832318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.832347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.832522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.832548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.832703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.832730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.832880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.832906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.833036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.833065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.833205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.833234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.833400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.833429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 
00:35:54.591 [2024-10-28 05:11:44.833578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.833608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.833757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.833789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.833899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.591 [2024-10-28 05:11:44.833926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.591 qpair failed and we were unable to recover it. 00:35:54.591 [2024-10-28 05:11:44.834735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.834767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.834939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.834985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.835125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.835153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.835292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.835320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.835443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.835470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.835613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.835657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.835799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.835825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 
00:35:54.592 [2024-10-28 05:11:44.835945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.835984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.836110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.836137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.836282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.836309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.836414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.836441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.836596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.836622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.836760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.836786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.836917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.836946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.837120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.837150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.837333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.837362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.837522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.837550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 
00:35:54.592 [2024-10-28 05:11:44.837704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.837732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.837893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.837922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.838125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.838168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.838334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.838379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.838543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.838569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.838753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.838798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.838968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.839019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.839218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.839263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.839406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.839434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.839575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.839603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 
00:35:54.592 [2024-10-28 05:11:44.839774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.839819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.839967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.840011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.840157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.840184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.840291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.840318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.840474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.840514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.840668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.840697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.840810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.840837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.841003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.841042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.841209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.841242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.841376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.841408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 
00:35:54.592 [2024-10-28 05:11:44.841558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.841584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.841701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.841729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.841867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.592 [2024-10-28 05:11:44.841897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.592 qpair failed and we were unable to recover it. 00:35:54.592 [2024-10-28 05:11:44.842082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.842128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.842293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.842338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.842454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.842481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.842597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.842624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.842783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.842809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.842922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.842961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.843086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.843114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 
00:35:54.593 [2024-10-28 05:11:44.843258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.843284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.843459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.843486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.843611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.843651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.843778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.843805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.843921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.843947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.844095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.844121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.844265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.844292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.844436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.844462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.844609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.844642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.844757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.844784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 
00:35:54.593 [2024-10-28 05:11:44.844893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.844919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.845037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.845063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.845211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.845237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.845407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.845434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.845545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.845573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.845701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.845745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.845893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.845922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.846050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.846084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.846247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.846297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.846427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.846459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 
00:35:54.593 [2024-10-28 05:11:44.846650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.846678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.846806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.846833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.846973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.847018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.847181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.847226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.847386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.847435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.847597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.847623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.847781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.847808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.847953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.847982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.848160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.848205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.848348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.848380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 
00:35:54.593 [2024-10-28 05:11:44.848489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.848516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.848626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.848658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.848767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.593 [2024-10-28 05:11:44.848811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.593 qpair failed and we were unable to recover it. 00:35:54.593 [2024-10-28 05:11:44.848938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.848967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.849126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.849161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.849318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.849347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.849510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.849540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.849714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.849740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.849876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.849924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.850041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.850067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 
00:35:54.594 [2024-10-28 05:11:44.850231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.850275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.850395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.850422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.850537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.850563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.850704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.850731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.850871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.850899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.851016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.851042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.851166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.851192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.851317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.851346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.851483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.851509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.851621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.851653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 
00:35:54.594 [2024-10-28 05:11:44.851790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.851836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.851994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.852039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.852177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.852221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.852390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.852435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.852576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.852603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.852771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.852816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.852972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.852999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.853148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.853175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.853284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.853311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.853456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.853484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 
00:35:54.594 [2024-10-28 05:11:44.853601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.853627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.853747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.853773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.853905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.853935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.854119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.854148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.854279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.854308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.854492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.854524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.854666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.854694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.854827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.854872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.855003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.855047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.855199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.855243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 
00:35:54.594 [2024-10-28 05:11:44.855415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.855442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.855588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.855615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.855754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.594 [2024-10-28 05:11:44.855798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.594 qpair failed and we were unable to recover it. 00:35:54.594 [2024-10-28 05:11:44.855959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.856003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.856162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.856207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.856371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.856398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.856539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.856566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.856734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.856782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.856902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.856928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.857097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.857124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 
00:35:54.595 [2024-10-28 05:11:44.857266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.857292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.857456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.857482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.857648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.857693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.857841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.857886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.858019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.858062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.858238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.858264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.858430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.858457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.858567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.858593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.858731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.858777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.858885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.858912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 
00:35:54.595 [2024-10-28 05:11:44.859076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.859121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.859237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.859263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.859428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.859454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.859621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.859657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.859797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.859841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.860011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.860055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.860180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.860212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.860363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.860389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.860506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.860532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.860678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.860709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 
00:35:54.595 [2024-10-28 05:11:44.860893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.860938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.861075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.861125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.861267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.861294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.861464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.861491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.861662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.861691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.861819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.595 [2024-10-28 05:11:44.861865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.595 qpair failed and we were unable to recover it. 00:35:54.595 [2024-10-28 05:11:44.862038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.862083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.862227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.862271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.862453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.862481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.862658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.862685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 
00:35:54.596 [2024-10-28 05:11:44.862826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.862872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.862983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.863009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.863181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.863207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.863373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.863399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.863522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.863548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.863687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.863724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.863850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.863878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.864017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.864044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.864197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.864224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.864360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.864388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 
00:35:54.596 [2024-10-28 05:11:44.864531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.864557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.864680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.864709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.864822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.864848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.865030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.865060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.865269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.865301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.865482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.865513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.865697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.865733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.865884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.865917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.866114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.866157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.866380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.866430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 
00:35:54.596 [2024-10-28 05:11:44.866592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.866621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.866748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.866775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.866895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.866938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.867109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.867139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.867282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.867329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.867468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.867496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.867663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.867695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.867822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.867849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.867989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.868026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.868216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.868266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 
00:35:54.596 [2024-10-28 05:11:44.868447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.868495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.868686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.868714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.868858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.868883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.869050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.869076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.869226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.869270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.869519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.596 [2024-10-28 05:11:44.869551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.596 qpair failed and we were unable to recover it. 00:35:54.596 [2024-10-28 05:11:44.869682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.869723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.869863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.869889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.870034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.870064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.870245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.870293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 
00:35:54.597 [2024-10-28 05:11:44.870449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.870478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.870611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.870643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.870782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.870810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.870936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.870962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.871094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.871120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.871271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.871297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.871441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.871466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.871606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.871639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.871755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.871781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.871886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.871922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 
00:35:54.597 [2024-10-28 05:11:44.872068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.872096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.872272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.872300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.872445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.872474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.872640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.872671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.872787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.872812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.872945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.872971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.873117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.873144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.873264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.873289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.873425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.873453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.873623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.873655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 
00:35:54.597 [2024-10-28 05:11:44.873766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.873792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.873960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.873997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.874150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.874179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.874365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.874394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.874537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.874563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.874706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.874733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.874853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.874878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.875029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.875056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.875205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.875235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.875418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.875447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 
00:35:54.597 [2024-10-28 05:11:44.875583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.875608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.875728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.875754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.875897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.875923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.876039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.876064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.876271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.876300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.876457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.597 [2024-10-28 05:11:44.876486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.597 qpair failed and we were unable to recover it. 00:35:54.597 [2024-10-28 05:11:44.876671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.598 [2024-10-28 05:11:44.876715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.598 qpair failed and we were unable to recover it. 00:35:54.598 [2024-10-28 05:11:44.876855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.598 [2024-10-28 05:11:44.876880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.598 qpair failed and we were unable to recover it. 00:35:54.598 [2024-10-28 05:11:44.877041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.598 [2024-10-28 05:11:44.877068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.598 qpair failed and we were unable to recover it. 00:35:54.598 [2024-10-28 05:11:44.877209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.598 [2024-10-28 05:11:44.877235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.598 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from [2024-10-28 05:11:44.877423] through [2024-10-28 05:11:44.887530] ...]
00:35:54.599 [2024-10-28 05:11:44.887667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.599 [2024-10-28 05:11:44.887706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.599 qpair failed and we were unable to recover it.
[... the same failure sequence keeps repeating, switching between tqpair=0x7f9f04000b90 and tqpair=0x1ac3390, from [2024-10-28 05:11:44.887818] through [2024-10-28 05:11:44.911858] ...]
00:35:54.603 [2024-10-28 05:11:44.912011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.603 [2024-10-28 05:11:44.912036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.603 qpair failed and we were unable to recover it.
00:35:54.603 [2024-10-28 05:11:44.912169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.912195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.912347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.912373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.912537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.912565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.912762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.912802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.912952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.912981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.913123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.913149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.913280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.913305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.913473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.913499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.913679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.913706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.913814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.913840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 
00:35:54.603 [2024-10-28 05:11:44.913982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.914008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.914146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.914171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.914308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.914335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.914470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.914496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.914644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.914670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.914804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.914834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.915002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.915028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.915163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.915189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.915348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.915379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.915533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.915559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 
00:35:54.603 [2024-10-28 05:11:44.915684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.915710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.915850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.915875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.916014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.916040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.916174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.916199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.916364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.916389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.916555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.916581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.916727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.916754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.916886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.916915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.917060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.917086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.917195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.917220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 
00:35:54.603 [2024-10-28 05:11:44.917361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.603 [2024-10-28 05:11:44.917387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.603 qpair failed and we were unable to recover it. 00:35:54.603 [2024-10-28 05:11:44.917522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.917548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.917682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.917708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.917838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.917863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.918028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.918054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.918187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.918213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.918350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.918376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.918536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.918564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.918761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.918788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.918935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.918961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 
00:35:54.604 [2024-10-28 05:11:44.919107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.919134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.919306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.919348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.919486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.919511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.919680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.919706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.919888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.919918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.920072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.920102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.920284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.920309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.920451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.920477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.920627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.920676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.920823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.920850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 
00:35:54.604 [2024-10-28 05:11:44.920990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.921016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.921180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.921206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.921326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.921353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.921519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.921561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.921718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.921749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.921902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.921929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.922040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.922065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.922207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.922233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.922331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.922356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.924773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.924800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 
00:35:54.604 [2024-10-28 05:11:44.924916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.924942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.925107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.925134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.925271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.925296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.925459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.925485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.925645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.925689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.925835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.925860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.604 [2024-10-28 05:11:44.926001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.604 [2024-10-28 05:11:44.926026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.604 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.926162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.926193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.926330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.926356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.926571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.926596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 
00:35:54.605 [2024-10-28 05:11:44.926713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.926740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.926907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.926951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.927104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.927132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.927283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.927309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.927448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.927474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.927623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.927663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.927827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.927852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.927965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.927991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.928132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.928158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.928307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.928334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 
00:35:54.605 [2024-10-28 05:11:44.928472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.928498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.928646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.928673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.928810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.928836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.928984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.929009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.929166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.929195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.929352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.929378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.929513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.929538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.929696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.929722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.929857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.929883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.930051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.930077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 
00:35:54.605 [2024-10-28 05:11:44.930215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.930240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.930351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.930378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.930518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.930544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.930706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.930735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.930878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.930904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.931045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.931070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.931260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.931288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.931477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.931502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.931618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.931681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.931837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.931865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 
00:35:54.605 [2024-10-28 05:11:44.932001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.932026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.932172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.932198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.932331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.932356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.932509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.932535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.932710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.932736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.932900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.932943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.605 qpair failed and we were unable to recover it. 00:35:54.605 [2024-10-28 05:11:44.933125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.605 [2024-10-28 05:11:44.933150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.933287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.933317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.933480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.933509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.933693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.933719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 
00:35:54.606 [2024-10-28 05:11:44.933858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.933883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.934056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.934085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.934225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.934251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.934391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.934416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.934546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.934572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.934715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.934741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.934904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.934930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.935089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.935117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.935300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.935326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.935462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.935502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 
00:35:54.606 [2024-10-28 05:11:44.935651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.935679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.935839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.935866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.936029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.936054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.936194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.936219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.936337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.936363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.936501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.936525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.936715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.936754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.936898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.936926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.937080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.937110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 00:35:54.606 [2024-10-28 05:11:44.937260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.606 [2024-10-28 05:11:44.937289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.606 qpair failed and we were unable to recover it. 
00:35:54.606 [2024-10-28 05:11:44.937451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.606 [2024-10-28 05:11:44.937477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.606 qpair failed and we were unable to recover it.
00:35:54.606-00:35:54.611 [2024-10-28 05:11:44.937584 - 05:11:44.975503] (the same three-line sequence repeats, roughly 210 occurrences in total: posix.c:1055:posix_sock_create reports connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1ac3390, tqpair=0x7f9f04000b90 or tqpair=0x7f9f08000b90, all with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it.")
00:35:54.611 [2024-10-28 05:11:44.975645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.611 [2024-10-28 05:11:44.975673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.611 qpair failed and we were unable to recover it. 00:35:54.611 [2024-10-28 05:11:44.975831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.611 [2024-10-28 05:11:44.975875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.611 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.976038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.976082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.976286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.976330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.976468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.976495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.976647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.976688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.976802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.976830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.977017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.977046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.977221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.977250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.977421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.977451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 
00:35:54.612 [2024-10-28 05:11:44.977626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.977684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.977827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.977853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.978036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.978065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.978178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.978206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.978404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.978437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.978597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.978648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.978793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.978821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.978987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.979015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.979139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.979168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.979344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.979374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 
00:35:54.612 [2024-10-28 05:11:44.979524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.979563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.979701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.979731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.979869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.979900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.980068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.980097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.980247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.980275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.980460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.980489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.980695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.980725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.980878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.980905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.981109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.981141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.981297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.981326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 
00:35:54.612 [2024-10-28 05:11:44.981472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.981516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.981663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.981689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.981865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.981904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.982142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.982173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.982362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.982392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.982505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.982535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.982690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.982730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.982904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.982932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.983087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.983131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.983296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.983340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 
00:35:54.612 [2024-10-28 05:11:44.983502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.983546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.612 [2024-10-28 05:11:44.983714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.612 [2024-10-28 05:11:44.983742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.612 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.983879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.983906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.984025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.984052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.984193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.984219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.984332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.984358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.984463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.984489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.984632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.984668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.984820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.984848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.984993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.985027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 
00:35:54.613 [2024-10-28 05:11:44.985151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.985178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.985297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.985325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.985494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.985520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.985624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.985658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.985797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.985826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.985982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.986013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.986178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.986207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.986361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.986391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.986584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.986611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.986738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.986765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 
00:35:54.613 [2024-10-28 05:11:44.986873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.986900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.987046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.987090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.987210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.987238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.987466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.987495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.987648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.987676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.987817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.987845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.987992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.988021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.988192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.988221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.988399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.988427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.988590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.988618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 
00:35:54.613 [2024-10-28 05:11:44.988765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.988793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.988910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.988936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.989135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.989164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.989283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.989320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.989467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.989496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.989683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.989722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.989882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.989933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.613 [2024-10-28 05:11:44.990065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.613 [2024-10-28 05:11:44.990109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.613 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.990276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.990302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.990414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.990441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 
00:35:54.614 [2024-10-28 05:11:44.990584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.990610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.990748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.990774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.990920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.990951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.991113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.991158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.991342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.991386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.991553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.991580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.991735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.991780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.991947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.991990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.992181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.992225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.992339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.992368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 
00:35:54.614 [2024-10-28 05:11:44.992549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.992576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.992749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.992794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.992965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.993009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.993130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.993174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.993352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.993379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.993530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.993557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.993764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.993810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.993985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.994016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.994173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.994202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.994357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.994389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 
00:35:54.614 [2024-10-28 05:11:44.994581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.994613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.994782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.994812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.995001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.995046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.995238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.995283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.995422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.995449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.995554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.995582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.995780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.995827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.995991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.996035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.996195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.996240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.996407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.996434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 
00:35:54.614 [2024-10-28 05:11:44.996565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.996591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.996782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.996826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.996979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.997023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.997143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.997169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.997347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.997373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.997547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.997574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.997729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.997778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.614 qpair failed and we were unable to recover it. 00:35:54.614 [2024-10-28 05:11:44.997979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.614 [2024-10-28 05:11:44.998023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 00:35:54.615 [2024-10-28 05:11:44.998204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.615 [2024-10-28 05:11:44.998248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 00:35:54.615 [2024-10-28 05:11:44.998412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.615 [2024-10-28 05:11:44.998439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 
00:35:54.615 [2024-10-28 05:11:44.998574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.615 [2024-10-28 05:11:44.998601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 00:35:54.615 [2024-10-28 05:11:44.998829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.615 [2024-10-28 05:11:44.998872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 00:35:54.615 [2024-10-28 05:11:44.999069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.615 [2024-10-28 05:11:44.999115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 00:35:54.615 [2024-10-28 05:11:44.999272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.615 [2024-10-28 05:11:44.999318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 00:35:54.615 [2024-10-28 05:11:44.999464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.615 [2024-10-28 05:11:44.999490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 00:35:54.615 [2024-10-28 05:11:44.999653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.615 [2024-10-28 05:11:44.999681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 00:35:54.615 [2024-10-28 05:11:44.999794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.615 [2024-10-28 05:11:44.999820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 00:35:54.615 [2024-10-28 05:11:44.999948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.615 [2024-10-28 05:11:44.999974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 00:35:54.615 [2024-10-28 05:11:45.000115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.615 [2024-10-28 05:11:45.000142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 00:35:54.615 [2024-10-28 05:11:45.000252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.615 [2024-10-28 05:11:45.000280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.615 qpair failed and we were unable to recover it. 
00:35:54.615 [2024-10-28 05:11:45.000424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.615 [2024-10-28 05:11:45.000450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:54.615 qpair failed and we were unable to recover it.
00:35:54.616 [2024-10-28 05:11:45.008627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.616 [2024-10-28 05:11:45.008677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.616 qpair failed and we were unable to recover it.
[The same three-line failure repeats for every remaining connection attempt logged between 05:11:45.000424 and 05:11:45.039320, alternating between tqpair=0x7f9f08000b90 and tqpair=0x1ac3390, always against addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it."]
00:35:54.620 [2024-10-28 05:11:45.039453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.620 [2024-10-28 05:11:45.039479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.620 qpair failed and we were unable to recover it. 00:35:54.620 [2024-10-28 05:11:45.039620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.620 [2024-10-28 05:11:45.039658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.620 qpair failed and we were unable to recover it. 00:35:54.620 [2024-10-28 05:11:45.039847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.620 [2024-10-28 05:11:45.039893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.620 qpair failed and we were unable to recover it. 00:35:54.620 [2024-10-28 05:11:45.040080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.620 [2024-10-28 05:11:45.040124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.620 qpair failed and we were unable to recover it. 00:35:54.620 [2024-10-28 05:11:45.040272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.620 [2024-10-28 05:11:45.040299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.620 qpair failed and we were unable to recover it. 00:35:54.620 [2024-10-28 05:11:45.040458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.620 [2024-10-28 05:11:45.040498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.620 qpair failed and we were unable to recover it. 00:35:54.620 [2024-10-28 05:11:45.040649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.620 [2024-10-28 05:11:45.040678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.620 qpair failed and we were unable to recover it. 00:35:54.620 [2024-10-28 05:11:45.040823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.620 [2024-10-28 05:11:45.040850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.620 qpair failed and we were unable to recover it. 00:35:54.620 [2024-10-28 05:11:45.041014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.620 [2024-10-28 05:11:45.041044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.620 qpair failed and we were unable to recover it. 00:35:54.620 [2024-10-28 05:11:45.041197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.620 [2024-10-28 05:11:45.041227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.620 qpair failed and we were unable to recover it. 
00:35:54.620 [2024-10-28 05:11:45.041377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.620 [2024-10-28 05:11:45.041406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.041558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.041586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.041703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.041730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.041894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.041924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.042061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.042088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.042296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.042325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.042443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.042473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.042645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.042673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.042832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.042871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.043111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.043154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 
00:35:54.621 [2024-10-28 05:11:45.043386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.043421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.043643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.043689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.043830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.043856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.044032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.044061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.044286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.044335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.044519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.044548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.044713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.044740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.044929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.044961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.045112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.045141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.045349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.045380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 
00:35:54.621 [2024-10-28 05:11:45.045533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.045562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.045702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.045729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.045876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.045902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.046089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.046119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.046276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.046305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.046446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.046475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.046624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.046680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.046813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.046840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.047042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.047101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.047347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.047392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 
00:35:54.621 [2024-10-28 05:11:45.047498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.047525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.047698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.047726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.047878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.047922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.048050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.048076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.048190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.048217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.048371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.048407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.621 qpair failed and we were unable to recover it. 00:35:54.621 [2024-10-28 05:11:45.048602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.621 [2024-10-28 05:11:45.048641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.048792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.048836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.048997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.049024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.049222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.049267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 
00:35:54.622 [2024-10-28 05:11:45.049428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.049480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.049621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.049655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.049795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.049822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.049978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.050007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.050169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.050198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.050352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.050382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.050515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.050544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.050686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.050714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.050878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.050925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.051131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.051183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 
00:35:54.622 [2024-10-28 05:11:45.051350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.051403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.051558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.051587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.051792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.051819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.051952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.051982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.052139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.052170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.052354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.052403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.052557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.052585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.052748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.052774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.052937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.052967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.053147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.053176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 
00:35:54.622 [2024-10-28 05:11:45.053321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.053367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.053510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.053542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.053756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.053796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.053916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.053944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.054118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.054164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.054355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.054399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.054562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.054588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.054704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.054732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.054890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.054932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.055114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.055144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 
00:35:54.622 [2024-10-28 05:11:45.055406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.055462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.055645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.055672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.055809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.055835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.056010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.056037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.056257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.056289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.056417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.056447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-10-28 05:11:45.056628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-10-28 05:11:45.056664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.056811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.056841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.057016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.057060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.057289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.057342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 
00:35:54.623 [2024-10-28 05:11:45.057530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.057580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.057733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.057761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.057898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.057923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.058078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.058118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.058277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.058306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.058514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.058573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.058747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.058775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.058892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.058918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.059075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.059104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.059335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.059364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 
00:35:54.623 [2024-10-28 05:11:45.059506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.059549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.059708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.059734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.059846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.059874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.059987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.060029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.060185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.060214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.060446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.060476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.060645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.060690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.060824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.060850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.061018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.061075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.061219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.061265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 
00:35:54.623 [2024-10-28 05:11:45.061395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.061440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.061577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.061603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.061764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.061808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.061969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.062000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.062189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.062230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.062367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.062396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.062541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.062570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.062751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.062780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.062942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.062971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.063125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.063152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 
00:35:54.623 [2024-10-28 05:11:45.063301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.063329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.063477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.063507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.063650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.063676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.063810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.063837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.063972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.064002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.064151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.064179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-10-28 05:11:45.064360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-10-28 05:11:45.064389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-10-28 05:11:45.064525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-10-28 05:11:45.064563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-10-28 05:11:45.064715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-10-28 05:11:45.064743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-10-28 05:11:45.064897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-10-28 05:11:45.064943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 
00:35:54.624 [2024-10-28 05:11:45.065111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.624 [2024-10-28 05:11:45.065156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:54.624 qpair failed and we were unable to recover it.
00:35:54.624-00:35:54.629 [2024-10-28 05:11:45.065315 .. 05:11:45.104471] The same three-line error pattern repeats for every remaining reconnect attempt in this window: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=<handle> with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. The tqpair handles observed across these attempts are 0x7f9f08000b90, 0x7f9f04000b90, 0x7f9f10000b90 and 0x1ac3390; the target address and port are unchanged throughout.
00:35:54.629 [2024-10-28 05:11:45.104711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.104738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.104879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.104906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.105061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.105091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.105272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.105301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.105444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.105473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.105591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.105620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.105827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.105866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.106063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.106110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.106266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.106310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.106420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.106447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 
00:35:54.629 [2024-10-28 05:11:45.106591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.106618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.106804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.106851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.107012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.107041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.107241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.107289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.107400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.107426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.107566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.107593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.107735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.107785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.107917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.107960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.108117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.108161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.108295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.108322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 
00:35:54.629 [2024-10-28 05:11:45.108451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.108477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-10-28 05:11:45.108614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-10-28 05:11:45.108649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.108784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.108813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.108986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.109029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.109215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.109258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.109397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.109423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.109530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.109557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.109737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.109782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.109970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.110015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.110199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.110242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 
00:35:54.630 [2024-10-28 05:11:45.110379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.110405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.110567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.110593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.110755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.110800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.110972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.111015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.111177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.111219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.111390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.111415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.111578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.111605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.111798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.111842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.111987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.112029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.112230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.112273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 
00:35:54.630 [2024-10-28 05:11:45.112417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.112443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.112551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.112577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.112766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.112794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.112897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.112924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.113083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.113129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.113294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.113321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.113457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.113483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.113640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.113688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.113845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.113889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.114055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.114100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 
00:35:54.630 [2024-10-28 05:11:45.114228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.114271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.114404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.114430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.114598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.114624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.114795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.114844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.115005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.115051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.115240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.115283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.115446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.630 [2024-10-28 05:11:45.115472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.630 qpair failed and we were unable to recover it. 00:35:54.630 [2024-10-28 05:11:45.115593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.115619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.115768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.115794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.115907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.115933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 
00:35:54.631 [2024-10-28 05:11:45.116097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.116123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.116283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.116310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.116424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.116450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.116618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.116653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.116817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.116843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.117000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.117043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.117229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.117271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.117442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.117469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.117601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.117628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.117824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.117868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 
00:35:54.631 [2024-10-28 05:11:45.118004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.118047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.118205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.118248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.118390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.118416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.118579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.118605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.118835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.118879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.119070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.119114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.119278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.119321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.119425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.119452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.119616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.119654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.119843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.119886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 
00:35:54.631 [2024-10-28 05:11:45.120116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.120160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.120301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.120344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.120483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.120509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.120649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.120676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.120862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.120905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.121072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.121119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.121321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.121359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.121554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.121585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.121743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.121775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.121912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.121942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 
00:35:54.631 [2024-10-28 05:11:45.122102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.122132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.122292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.122323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.122485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.122515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.122648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.122686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.122831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.122862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.123052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.123082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-10-28 05:11:45.123238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-10-28 05:11:45.123268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.123432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.123462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.123596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.123626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.123834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.123867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 
00:35:54.632 [2024-10-28 05:11:45.124030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.124058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.124170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.124196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.124338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.124366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.124515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.124541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.124714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.124741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.124845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.124871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.125039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.125065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.125208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.125233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.125384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.125409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.125576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.125601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 
00:35:54.632 [2024-10-28 05:11:45.125771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.125798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.125934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.125961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.126099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.126124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.126269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.126294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.126436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.126461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.126566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.126591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.126719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.126745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.126857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.126883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.127028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.127053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.127215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.127240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 
00:35:54.632 [2024-10-28 05:11:45.127409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.127435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.127542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.127567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.127728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.127754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.127894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.127919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.128056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.128081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.128198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.128224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.128360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.128385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.128518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.128544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.128686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.128712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.128854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.128879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 
00:35:54.632 [2024-10-28 05:11:45.129020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.129045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.129188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.129213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.129332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.129358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.129488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.129518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.129672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.129700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.129839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.129864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-10-28 05:11:45.130003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-10-28 05:11:45.130028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.130142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.130167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.130274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.130299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.130434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.130477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 
00:35:54.633 [2024-10-28 05:11:45.130631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.130687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.130828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.130854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.130996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.131021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.131160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.131185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.131347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.131372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.131482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.131508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.131620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.131655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.131795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.131820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.131974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.131999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.132173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.132198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 
00:35:54.633 [2024-10-28 05:11:45.132359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.132384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.132491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.132517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.132656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.132683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.132796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.132823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.132996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.133022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.133131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.133156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.133319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.133344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.133504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.133530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.133681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.133708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.133852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.133878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 
00:35:54.633 [2024-10-28 05:11:45.134053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.134079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.134239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.134264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.134400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.134426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.134553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.134579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.134727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.134754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.134897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.134923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.135036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.135062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.135175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.135200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.135333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.135358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.135472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.135498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 
00:35:54.633 [2024-10-28 05:11:45.135645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.135672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.135810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.135835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.135970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.135995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.136129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.136159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.136296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.136323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-10-28 05:11:45.136436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-10-28 05:11:45.136461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.136619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.136655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.136792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.136818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.136963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.136989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.137105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.137130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 
00:35:54.634 [2024-10-28 05:11:45.137264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.137289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.137430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.137457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.137597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.137622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.137768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.137794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.137927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.137953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.138094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.138120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.138255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.138280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.138396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.138423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.138562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.138588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.138736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.138762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 
00:35:54.634 [2024-10-28 05:11:45.138903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.138928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.139071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.139097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.139237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.139263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.139405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.139430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.139568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.139594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.139768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.139794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.139938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.139963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.140075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.140100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.140270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.140295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.140457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.140483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 
00:35:54.634 [2024-10-28 05:11:45.140626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.140657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.140825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.140850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.140985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.141011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.141171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.141196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.141305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.141330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.141441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.141466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.141629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.141660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.141818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.141844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.141948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.141974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.142090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.142116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 
00:35:54.634 [2024-10-28 05:11:45.142259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.142285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.142425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.142452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.142594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.142619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.142765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-10-28 05:11:45.142795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-10-28 05:11:45.142901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.142927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.143066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.143091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.143210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.143235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.143398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.143424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.143591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.143616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.143773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.143800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 
00:35:54.635 [2024-10-28 05:11:45.143945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.143970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.144110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.144136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.144280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.144307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.144407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.144433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.144573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.144599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.144742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.144768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.144907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.144932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.145097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.145123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.145257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.145282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.145398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.145423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 
00:35:54.635 [2024-10-28 05:11:45.145562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.145587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.145707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.145732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.145869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.145894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.146037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.146062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.146167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.146192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.146355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.146381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.146518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.146543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.146677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.146704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.146844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.146869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.147009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.147034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 
00:35:54.635 [2024-10-28 05:11:45.147177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.147204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.147341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.147367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.635 qpair failed and we were unable to recover it. 00:35:54.635 [2024-10-28 05:11:45.147529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.635 [2024-10-28 05:11:45.147554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.147700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.147727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.147871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.147898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.148044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.148069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.148203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.148228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.148392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.148417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.148584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.148609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.148756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.148782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 
00:35:54.636 [2024-10-28 05:11:45.148916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.148941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.149077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.149102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.149209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.149236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.149374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.149405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.149547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.149572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.149721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.149747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.149912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.149938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.150067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.150092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.150255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.150281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.150418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.150443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 
00:35:54.636 [2024-10-28 05:11:45.150610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.150640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.150778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.150803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.150928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.150953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.151118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.151143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.151274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.151299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.151415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.151440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.151577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.151602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.151783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.151809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.151921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.151946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.152057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.152082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 
00:35:54.636 [2024-10-28 05:11:45.152221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.152246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.152409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.152434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.152598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.152624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.152772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.152798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.152945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.152970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.153100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.153125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.153254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.153279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.153393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.153417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.153527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.153553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.153666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.153692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 
00:35:54.636 [2024-10-28 05:11:45.153862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.153887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.154028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.154053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.154193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.154219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.154384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.636 [2024-10-28 05:11:45.154409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.636 qpair failed and we were unable to recover it. 00:35:54.636 [2024-10-28 05:11:45.154522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.154547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.154713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.154739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.154902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.154928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.155090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.155115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.155252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.155278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.155384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.155409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 
00:35:54.637 [2024-10-28 05:11:45.155568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.155593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.155748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.155774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.155913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.155938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.156051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.156084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.156250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.156288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.156440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.156467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.156606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.156632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.156759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.156785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.156926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.156952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.157118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.157145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 
00:35:54.637 [2024-10-28 05:11:45.157279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.157305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.157463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.157489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.157643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.157671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.157805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.157830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.157990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.158016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.158156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.158181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.158333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.158359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.158496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.158522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.158644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.158670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.158785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.158810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 
00:35:54.637 [2024-10-28 05:11:45.158923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.158949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.159086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.159114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.159296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.159323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.159492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.159519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.159681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.159708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-10-28 05:11:45.159819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-10-28 05:11:45.159845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.159987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.160013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.160117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.160144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.160289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.160315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.160501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.160527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 
00:35:54.922 [2024-10-28 05:11:45.160683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.160709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.160854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.160879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.161018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.161043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.161207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.161233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.161337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.161363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.161499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.161531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.161697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.161724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.161835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.161861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.161979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.162006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.162119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.162145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 
00:35:54.922 [2024-10-28 05:11:45.162256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.162282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.162431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.162457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.162566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.162592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.162715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.162742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.162887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.162913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.163078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.163104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.163215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.163241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.163354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.163380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.163495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.163521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.163688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.163715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 
00:35:54.922 [2024-10-28 05:11:45.163826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.163852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.922 [2024-10-28 05:11:45.164017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.922 [2024-10-28 05:11:45.164043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.922 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.164152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.164178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.164308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.164334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.164471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.164496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.164643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.164669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.164782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.164808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.164938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.164964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.165101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.165144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.165283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.165311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 
00:35:54.923 [2024-10-28 05:11:45.165440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.165469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.165653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.165701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.165820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.165845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.165978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.166004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.166160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.166188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.166350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.166376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.166536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.166561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.166726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.166752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.166914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.166956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.167108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.167133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 
00:35:54.923 [2024-10-28 05:11:45.167277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.167303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.167441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.167471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.167607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.167640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.167805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.167830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.167968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.167994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.168104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.168129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.168267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.168292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.168458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.168484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.168622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.168671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.168851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.168877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 
00:35:54.923 [2024-10-28 05:11:45.169042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.169067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.169200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.169226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.169335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.169361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.169495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.169521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.169673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.169703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.169839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.169865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.170004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.170030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.170202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.170229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.170341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.170367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.170524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.170553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 
00:35:54.923 [2024-10-28 05:11:45.170728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.170754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.923 qpair failed and we were unable to recover it. 00:35:54.923 [2024-10-28 05:11:45.170885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.923 [2024-10-28 05:11:45.170910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.171092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.171121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.171278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.171303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.171406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.171431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.171595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.171623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.171747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.171776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.171935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.171961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.172142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.172175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.172323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.172352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 
00:35:54.924 [2024-10-28 05:11:45.172540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.172566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.172667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.172694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.172833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.172859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.172990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.173018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.173198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.173226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.173416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.173441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.173595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.173623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.173797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.173822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.173957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.173984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.174197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.174223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 
00:35:54.924 [2024-10-28 05:11:45.174339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.174365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.174498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.174523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.174667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.174693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.174903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.174928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.175085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.175114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.175246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.175274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.175390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.175418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.175598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.175624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.175749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.175775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.175880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.175905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 
00:35:54.924 [2024-10-28 05:11:45.176072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.176098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.176234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.176259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.176357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.176383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.176519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.176548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.176720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.176746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.176887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.176916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.177067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.177095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.177249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.177275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.177440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.177465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.177639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.177666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 
00:35:54.924 [2024-10-28 05:11:45.177804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.177830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.924 [2024-10-28 05:11:45.177937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.924 [2024-10-28 05:11:45.177963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.924 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.178092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.178121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.178269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.178295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.178422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.178448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.178614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.178650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.178823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.178848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.178952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.178977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.179110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.179136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.179269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.179298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 
00:35:54.925 [2024-10-28 05:11:45.179480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.179508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.179670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.179697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.179836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.179880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.180044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.180070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.180202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.180227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.180386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.180411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.180597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.180625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.180773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.180802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.180986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.181011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.181152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.181177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 
00:35:54.925 [2024-10-28 05:11:45.181333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.181362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.181483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.181512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.181664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.181694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.181853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.181879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.182000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.182026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.182160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.182185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.182349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.182377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.182505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.182530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.182704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.182747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.182878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.182907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 
00:35:54.925 [2024-10-28 05:11:45.183027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.183055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.183206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.183232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.183344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.183370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.183510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.183535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.183704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.183731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.183867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.183893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.184076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.184105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.184278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.184307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.184482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.184510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.184663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.184689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 
00:35:54.925 [2024-10-28 05:11:45.184802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.184828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.184936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.925 [2024-10-28 05:11:45.184961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.925 qpair failed and we were unable to recover it. 00:35:54.925 [2024-10-28 05:11:45.185104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.185129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.185278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.185304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.185454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.185482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.185637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.185666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.185820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.185849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.185988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.186014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.186126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.186151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.186263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.186289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 
00:35:54.926 [2024-10-28 05:11:45.186433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.186459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.186625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.186668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.186808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.186834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.186942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.186967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.187123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.187149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.187347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.187373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.187527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.187555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.187677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.187707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.187851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.187892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.188055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.188081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 
00:35:54.926 [2024-10-28 05:11:45.188193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.188236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.188424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.188449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.188612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.188642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.188784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.188813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.188956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.188981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.189134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.189163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.189336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.189365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.189523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.189549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.189697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.189744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.189897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.189926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 
00:35:54.926 [2024-10-28 05:11:45.190077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.190106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.190241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.190267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.190384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.190410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.190571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.190599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.190749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.190778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.190916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.926 [2024-10-28 05:11:45.190941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.926 qpair failed and we were unable to recover it. 00:35:54.926 [2024-10-28 05:11:45.191088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.191114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.191287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.191316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.191464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.191493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.191686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.191712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 
00:35:54.927 [2024-10-28 05:11:45.191844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.191873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.192071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.192096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.192257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.192300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.192462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.192487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.192627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.192658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.192825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.192854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.193016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.193042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.193185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.193210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.193355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.193380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.193494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.193520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 
00:35:54.927 [2024-10-28 05:11:45.193683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.193714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.193856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.193883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.194043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.194072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.194192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.194220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.194399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.194427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.194551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.194579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.194762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.194789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.194890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.194933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.195097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.195122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.195258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.195284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 
00:35:54.927 [2024-10-28 05:11:45.195456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.195484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.195660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.195689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.195862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.195891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.196051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.196077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.196221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.196247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.196383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.196408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.196532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.196561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.196697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.196723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.196863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.196889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.197002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.197028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 
00:35:54.927 [2024-10-28 05:11:45.197185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.197214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.197342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.197368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.197535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.197561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.197748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.197774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.197947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.197988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.198155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.927 [2024-10-28 05:11:45.198180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.927 qpair failed and we were unable to recover it. 00:35:54.927 [2024-10-28 05:11:45.198314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.198340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.198497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.198525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.198688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.198717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.198878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.198904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 
00:35:54.928 [2024-10-28 05:11:45.199003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.199029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.199201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.199226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.199335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.199360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.199494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.199519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.199657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.199701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.199850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.199878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.200030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.200059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.200218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.200244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.200384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.200410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.200549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.200575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 
00:35:54.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2480822 Killed "${NVMF_APP[@]}" "$@"
00:35:54.928 [2024-10-28 05:11:45.200780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.200809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 [2024-10-28 05:11:45.200960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.200987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 [2024-10-28 05:11:45.201126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.201152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:54.928 [2024-10-28 05:11:45.201294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.201320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:54.928 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:35:54.928 [2024-10-28 05:11:45.201515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.201543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:54.928 [2024-10-28 05:11:45.201691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.201717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:54.928 [2024-10-28 05:11:45.201856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.201882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 [2024-10-28 05:11:45.202014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.202042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.202194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.202222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.202384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.202410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.202540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.202583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.202759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.202785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.202954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.202996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.203162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.203188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.203301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.203327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.203460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.203485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 00:35:54.928 [2024-10-28 05:11:45.203618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.928 [2024-10-28 05:11:45.203652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.928 qpair failed and we were unable to recover it. 
00:35:54.928 [2024-10-28 05:11:45.203834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.203860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 [2024-10-28 05:11:45.204027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.204071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 [2024-10-28 05:11:45.204217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.204246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 [2024-10-28 05:11:45.204383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.204412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 [2024-10-28 05:11:45.204564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.204590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 [2024-10-28 05:11:45.204726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.204752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.928 qpair failed and we were unable to recover it.
00:35:54.928 [2024-10-28 05:11:45.204915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.928 [2024-10-28 05:11:45.204950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.929 qpair failed and we were unable to recover it.
00:35:54.929 [2024-10-28 05:11:45.205103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.929 [2024-10-28 05:11:45.205132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.929 qpair failed and we were unable to recover it.
00:35:54.929 [2024-10-28 05:11:45.205266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.929 [2024-10-28 05:11:45.205291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.929 qpair failed and we were unable to recover it.
00:35:54.929 [2024-10-28 05:11:45.205458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.929 [2024-10-28 05:11:45.205485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.929 qpair failed and we were unable to recover it.
00:35:54.929 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2481487
00:35:54.929 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:54.929 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2481487
00:35:54.929 [2024-10-28 05:11:45.205661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.929 [2024-10-28 05:11:45.205691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.929 qpair failed and we were unable to recover it.
00:35:54.929 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2481487 ']'
00:35:54.929 [2024-10-28 05:11:45.205839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.929 [2024-10-28 05:11:45.205868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.929 qpair failed and we were unable to recover it.
00:35:54.929 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:54.929 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:54.929 [2024-10-28 05:11:45.206031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.929 [2024-10-28 05:11:45.206058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.929 qpair failed and we were unable to recover it.
00:35:54.929 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:54.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:54.929 [2024-10-28 05:11:45.206203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.929 [2024-10-28 05:11:45.206230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.929 qpair failed and we were unable to recover it.
00:35:54.929 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:54.929 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:54.929 [2024-10-28 05:11:45.206374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.929 [2024-10-28 05:11:45.206401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.929 qpair failed and we were unable to recover it.
00:35:54.929 [2024-10-28 05:11:45.206560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.929 [2024-10-28 05:11:45.206589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.929 qpair failed and we were unable to recover it.
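The shell-trace lines interleaved above explain the refused connections: the test's previous nvmf_tgt instance (PID 2480822) has been killed, disconnect_init relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with core mask 0xF0, and waitforlisten then waits (max_retries=100) for the new process, PID 2481487, to come up and listen on the UNIX domain socket /var/tmp/spdk.sock, while the host side keeps retrying its TCP connects in the meantime. A rough standalone C sketch of that wait-for-listener step (an illustration of the pattern only, not SPDK's waitforlisten; the poll interval is an assumption) could look like:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Returns 0 once something is accepting connections on 'path', -1 if the
 * retry budget runs out. */
static int wait_for_unix_listener(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);          /* a listener picked up: the daemon is ready */
            return 0;
        }
        close(fd);              /* not there yet (ENOENT/ECONNREFUSED); retry */
        usleep(100 * 1000);
    }
    return -1;
}

int main(void)
{
    /* /var/tmp/spdk.sock and max_retries=100 mirror the values visible in
     * the shell trace above; here they are just parameters of the sketch. */
    if (wait_for_unix_listener("/var/tmp/spdk.sock", 100) != 0)
        fprintf(stderr, "timed out waiting for the listener\n");
    return 0;
}

Polling connect() on the RPC socket is only one way to detect readiness; the point is that the host-side connect() errors in this log are expected noise until this wait completes and the target starts listening on 10.0.0.2:4420 again.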
00:35:54.929 [2024-10-28 05:11:45.206734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.206760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.206897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.206939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.207061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.207088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.207228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.207254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.207399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.207427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.207566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.207592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.207729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.207755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.207893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.207920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.208091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.208117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.208262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.208288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 
00:35:54.929 [2024-10-28 05:11:45.208453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.208479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.208649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.208675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.208838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.208863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.209001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.209027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.209152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.209178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.209319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.209345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.209482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.209508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.209665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.209692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.209828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.209854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.209992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.210018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 
00:35:54.929 [2024-10-28 05:11:45.210153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.210180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.210317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.210342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.210486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.210512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.210650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.210677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.210842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.210867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.211004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.211031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.929 qpair failed and we were unable to recover it. 00:35:54.929 [2024-10-28 05:11:45.211165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.929 [2024-10-28 05:11:45.211190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.211296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.211322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.211464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.211494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.211640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.211667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 
00:35:54.930 [2024-10-28 05:11:45.211793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.211819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.211980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.212006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.212168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.212193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.212303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.212328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.212459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.212485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.212655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.212680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.212815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.212840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.212954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.212980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.213124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.213149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.213274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.213300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 
00:35:54.930 [2024-10-28 05:11:45.213467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.213494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.213640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.213667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.213832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.213859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.214011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.214038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.214201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.214227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.214345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.214374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.214486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.214513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.214681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.214711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.214855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.214881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.215031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.215057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 
00:35:54.930 [2024-10-28 05:11:45.215175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.215201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.215322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.215348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.215493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.215524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.215672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.215702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.215865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.215891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.216032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.216058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.216202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.216228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.216399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.216424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.216541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.216568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 00:35:54.930 [2024-10-28 05:11:45.216686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.930 [2024-10-28 05:11:45.216713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.930 qpair failed and we were unable to recover it. 
00:35:54.934 [2024-10-28 05:11:45.238588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.238615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.238796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.238833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.239000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.239031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.239177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.239207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.239414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.239454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.239626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.239662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.239818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.239844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.240016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.240042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.240161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.240187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.240332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.240359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 
00:35:54.934 [2024-10-28 05:11:45.240467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.240493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.240534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad1330 (9): Bad file descriptor 00:35:54.934 [2024-10-28 05:11:45.240729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.240761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.240915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.240950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.241110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.241136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.241308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.241335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.241483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.241510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.241660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.241689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.241826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.241853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 00:35:54.934 [2024-10-28 05:11:45.241994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.934 [2024-10-28 05:11:45.242027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.934 qpair failed and we were unable to recover it. 
00:35:54.936 [2024-10-28 05:11:45.250823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.250850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.250993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.251019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.251164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.251191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.251337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.251365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.251503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.251529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.251677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.251705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.251829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.251857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.251995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.252022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.252146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.252172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.252321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.252347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 
00:35:54.936 [2024-10-28 05:11:45.252490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.252516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.252682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.252710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.252855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.252887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.253034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.253061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.253206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.253233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.253370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.253397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.253569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.253596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.253772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.253801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.253945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.253971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.254117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.254143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 
00:35:54.936 [2024-10-28 05:11:45.254315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.254341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.254492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.254522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.254573] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:35:54.936 [2024-10-28 05:11:45.254657] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:54.936 [2024-10-28 05:11:45.254662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.254691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.254830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.254857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.255029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.255055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.255179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.255206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.255369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.255396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.255537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.255564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.255710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.255738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 
00:35:54.936 [2024-10-28 05:11:45.255885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.255913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.256065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.256092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.256232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.256258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.256423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.256452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.256575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.256611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.256761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.256792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.256987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.257016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.257155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.257182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.257328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.936 [2024-10-28 05:11:45.257356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.936 qpair failed and we were unable to recover it. 00:35:54.936 [2024-10-28 05:11:45.257510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.257537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 
00:35:54.937 [2024-10-28 05:11:45.257657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.257685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.257819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.257847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.257994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.258020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.258166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.258193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.258356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.258383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.258525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.258554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.258673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.258702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.258884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.258910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.259052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.259079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.259219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.259246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 
00:35:54.937 [2024-10-28 05:11:45.259361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.259387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.259518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.259544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.259725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.259771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.259964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.259992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.260130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.260156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.260318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.260345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.260460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.260501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.260674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.260702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.260843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.260869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.261024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.261051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 
00:35:54.937 [2024-10-28 05:11:45.261191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.261217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.261327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.261356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.261480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.261508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.261652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.261680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.261844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.261871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.262024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.262051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.262195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.262222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.262360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.262386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.262546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.262572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.262687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.262715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 
00:35:54.937 [2024-10-28 05:11:45.262857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.262884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.263025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.263052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.263191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.263218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.263390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.263416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.263526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.263553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.263741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.263768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.263913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.263942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.937 [2024-10-28 05:11:45.264058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.937 [2024-10-28 05:11:45.264085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.937 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.264193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.264224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.264335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.264367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 
00:35:54.938 [2024-10-28 05:11:45.264508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.264534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.264686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.264713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.264827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.264854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.264997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.265023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.265147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.265174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.265314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.265341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.265451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.265478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.265619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.265664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.265809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.265836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.266010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.266037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 
00:35:54.938 [2024-10-28 05:11:45.266200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.266227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.266366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.266393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.266538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.266569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.266695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.266722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.266833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.266866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.267033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.267060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.267202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.267229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.267404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.267431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.267537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.267564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.267708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.267735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 
00:35:54.938 [2024-10-28 05:11:45.267897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.267924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.268039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.268065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.268185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.268212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.268382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.268409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.268520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.268546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.268681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.268709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.268853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.268880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.269046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.269082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.269190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.269216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.269328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.269356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 
00:35:54.938 [2024-10-28 05:11:45.269488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.269515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.269654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.269682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.269821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.269848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.269959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.269992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.270142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.270169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.270282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.270309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.270473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.270500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.938 [2024-10-28 05:11:45.270611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.938 [2024-10-28 05:11:45.270642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.938 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.270809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.270836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.270963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.270994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 
00:35:54.939 [2024-10-28 05:11:45.271142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.271169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.271308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.271334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.271468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.271494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.271644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.271671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.271826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.271852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.271995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.272021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.272165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.272192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.272362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.272389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.272537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.272563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.272706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.272733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 
00:35:54.939 [2024-10-28 05:11:45.272899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.272925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.273036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.273062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.273217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.273243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.273426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.273452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.273589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.273617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.273831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.273869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.274031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.274062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.274181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.274210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.274385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.274415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.274570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.274599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 
00:35:54.939 [2024-10-28 05:11:45.274787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.274818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.274933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.274973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.275152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.275182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.275332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.275361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.275516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.275544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.275693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.275722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.275854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.275894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.276056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.276084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.276226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.276253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.276401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.276428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 
00:35:54.939 [2024-10-28 05:11:45.276538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.276565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.276677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.276704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.939 [2024-10-28 05:11:45.276877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.939 [2024-10-28 05:11:45.276904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.939 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.277076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.277104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.277280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.277306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.277423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.277450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.277613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.277653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.277811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.277839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.277989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.278016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.278159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.278186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 
00:35:54.940 [2024-10-28 05:11:45.278371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.278397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.278540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.278567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.278713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.278741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.278874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.278900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.279045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.279071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.279257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.279284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.279400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.279426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.279601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.279627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.279770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.279797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.279960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.279989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 
00:35:54.940 [2024-10-28 05:11:45.280169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.280196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.280334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.280363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.280479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.280506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.280667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.280712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.280920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.280959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.281104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.281132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.281276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.281307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.281474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.281501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.281621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.281654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.281801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.281827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 
00:35:54.940 [2024-10-28 05:11:45.281938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.281966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.282087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.282113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.282278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.282305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.282442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.282468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.282642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.282669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.282782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.282816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.282960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.282989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.283129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.283155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.283290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.283316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.283489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.283515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 
00:35:54.940 [2024-10-28 05:11:45.283660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.283687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.283861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.283888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.940 [2024-10-28 05:11:45.284070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.940 [2024-10-28 05:11:45.284101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.940 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.284239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.284266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.284431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.284457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.284599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.284625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.284793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.284820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.284943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.284971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.285085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.285112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.285263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.285290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 
00:35:54.941 [2024-10-28 05:11:45.285426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.285454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.285597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.285623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.285733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.285772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.285943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.285970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.286096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.286123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.286263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.286289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.286436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.286465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.286630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.286662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.286809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.286835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.286952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.286978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 
00:35:54.941 [2024-10-28 05:11:45.287095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.287122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.287296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.287323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.287458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.287484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.287649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.287683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.287850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.287878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.288041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.288068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.288226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.288252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.288366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.288394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.288546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.288572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.288696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.288724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 
00:35:54.941 [2024-10-28 05:11:45.288895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.288922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.289121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.289161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.289288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.289316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.289483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.289513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.289655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.289682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.289833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.289860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.290005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.290032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.290209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.290236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.290371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.290398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.290517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.290545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 
00:35:54.941 [2024-10-28 05:11:45.290722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.290749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.290891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.290919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.941 qpair failed and we were unable to recover it. 00:35:54.941 [2024-10-28 05:11:45.291089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.941 [2024-10-28 05:11:45.291116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.291264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.291292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.291412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.291438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.291580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.291606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.291755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.291782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.291924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.291951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.292097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.292122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.292259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.292286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 
00:35:54.942 [2024-10-28 05:11:45.292432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.292464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.292610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.292641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.292781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.292808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.292990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.293016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.293130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.293157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.293334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.293373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.293487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.293515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.293654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.293681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.293824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.293851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.293990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.294017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 
00:35:54.942 [2024-10-28 05:11:45.294160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.294186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.294292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.294319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.294434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.294462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.294587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.294614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.294756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.294783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.294922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.294950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.295114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.295140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.295253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.295281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.295422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.295449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.295615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.295649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 
00:35:54.942 [2024-10-28 05:11:45.295760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.295787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.295938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.295964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.296113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.296139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.296270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.296298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.296440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.296466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.296572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.296598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.296745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.296774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.296926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.296953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.297115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.297141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.297258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.297286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 
00:35:54.942 [2024-10-28 05:11:45.297423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.297450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.297572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.297600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.942 qpair failed and we were unable to recover it. 00:35:54.942 [2024-10-28 05:11:45.297785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.942 [2024-10-28 05:11:45.297813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.297966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.297993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.298118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.298145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.298294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.298320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.298460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.298486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.298650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.298679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.298817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.298844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.299014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.299042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 
00:35:54.943 [2024-10-28 05:11:45.299154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.299181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.299338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.299365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.299533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.299559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.299703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.299729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.299863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.299889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.300036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.300065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.300202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.300228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.300375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.300403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.300567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.300593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.300715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.300743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 
00:35:54.943 [2024-10-28 05:11:45.300919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.300957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.301062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.301090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.301240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.301266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.301404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.301431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.301554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.301580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.301727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.301757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.301879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.301904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.302047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.302082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.302224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.302250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.302391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.302418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 
00:35:54.943 [2024-10-28 05:11:45.302557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.302583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.302737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.302764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.302909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.302945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.303087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.303125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.303290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.303317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.303461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.303487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.303641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.303668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.303809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.303834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.303952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.303981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.304095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.304122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 
00:35:54.943 [2024-10-28 05:11:45.304259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.304287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.304429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.304455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.943 [2024-10-28 05:11:45.304591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.943 [2024-10-28 05:11:45.304617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.943 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.304768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.304795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.304933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.304960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.305102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.305128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.305286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.305313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.305480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.305506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.305667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.305694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.305835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.305861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 
00:35:54.944 [2024-10-28 05:11:45.306031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.306057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.306227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.306253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.306393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.306419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.306558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.306586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.306763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.306790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.306931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.306958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.307096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.307123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.307232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.307258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.307363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.307389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.307506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.307532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 
00:35:54.944 [2024-10-28 05:11:45.307681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.307708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.307847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.307873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.308011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.308036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.308202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.308228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.308381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.308407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.308547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.308573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.308712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.308739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.308884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.308910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.309051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.309077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.309225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.309251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 
00:35:54.944 [2024-10-28 05:11:45.309369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.309394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.309509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.309535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.309677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.309703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.309815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.309840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.309985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.310024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.310174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.310201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.310314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.310341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.944 [2024-10-28 05:11:45.310480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.944 [2024-10-28 05:11:45.310508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.944 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.310656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.310683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.310795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.310823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 
00:35:54.945 [2024-10-28 05:11:45.310928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.310955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.311072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.311097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.311241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.311267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.311401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.311427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.311542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.311569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.311715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.311741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.311880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.311906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.312045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.312070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.312209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.312235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.312376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.312402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 
00:35:54.945 [2024-10-28 05:11:45.312536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.312562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.312688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.312714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.312892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.312920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.313071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.313097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.313238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.313264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.313373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.313399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.313563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.313589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.313744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.313771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.313908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.313940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.314076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.314103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 
00:35:54.945 [2024-10-28 05:11:45.314242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.314268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.314416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.314443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.314609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.314643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.314769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.314796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.314937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.314963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.315107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.315134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.315271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.315297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.315436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.315464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.315611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.315642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.315812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.315838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 
00:35:54.945 [2024-10-28 05:11:45.315986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.316013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.316153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.316179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.316347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.316373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.316490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.316517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.316687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.316714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.316823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.316849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.316972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.316998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.945 [2024-10-28 05:11:45.317106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.945 [2024-10-28 05:11:45.317132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.945 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.317237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.317266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.317379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.317405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 
00:35:54.946 [2024-10-28 05:11:45.317570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.317595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.317736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.317762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.317928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.317953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.318121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.318147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.318282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.318308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.318424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.318449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.318557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.318583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.318700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.318725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.318889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.318915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.319082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.319107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 
00:35:54.946 [2024-10-28 05:11:45.319273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.319298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.319407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.319434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.319582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.319609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.319752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.319778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.319938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.319977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.320159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.320186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.320330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.320358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.320501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.320528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.320666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.320693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.320833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.320859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 
00:35:54.946 [2024-10-28 05:11:45.320994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.321020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.321169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.321196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.321334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.321361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.321467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.321494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.321671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.321698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.321812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.321843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.321979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.322005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.322143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.322169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.322331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.322357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.322470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.322496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 
00:35:54.946 [2024-10-28 05:11:45.322643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.322669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.322835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.322861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.323008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.323033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.323165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.323191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.323357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.323383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.323525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.323553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.323717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.323745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.323898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.946 [2024-10-28 05:11:45.323924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.946 qpair failed and we were unable to recover it. 00:35:54.946 [2024-10-28 05:11:45.324097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.324123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.324273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.324299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 
00:35:54.947 [2024-10-28 05:11:45.324415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.324441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.324609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.324642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.324760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.324784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.324921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.324947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.325113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.325139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.325266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.325292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.325399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.325425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.325600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.325648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.325762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.325789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.325949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.325976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 
00:35:54.947 [2024-10-28 05:11:45.326140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.326166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.326277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.326304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.326445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.326476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.326619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.326653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.326769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.326796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.326936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.326963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.327077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.327103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.327245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.327271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.327436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.327462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.327606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.327644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 
00:35:54.947 [2024-10-28 05:11:45.327757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.327784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.327921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.327947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.328087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.328112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.328278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.328304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.328420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.328446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.328581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.328622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.328779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.328808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.328950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.328977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.329116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.329143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.329261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.329287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 
00:35:54.947 [2024-10-28 05:11:45.329428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.329455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.329612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.329662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.329843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.329871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.330037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.330064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.330165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.330192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.330309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.330335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.330451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.330477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.330620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.947 [2024-10-28 05:11:45.330653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.947 qpair failed and we were unable to recover it. 00:35:54.947 [2024-10-28 05:11:45.330796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.330822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.330966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.330999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 
00:35:54.948 [2024-10-28 05:11:45.331135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.331162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.331325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.331351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.331491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.331519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.331684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.331725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.331852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.331881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.332050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.332077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.332246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.332272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.332389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.332416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.332547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.332574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.332713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.332742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 
00:35:54.948 [2024-10-28 05:11:45.332884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.332911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.333057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.333084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.333255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.333282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.333427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.333455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.333578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.333605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.333733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.333761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.333909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.333938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.334104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.334130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.334268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.334295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.334436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.334463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 
00:35:54.948 [2024-10-28 05:11:45.334597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.334623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.334808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.334835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.334939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.334965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.335135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.335162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.335327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.335355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.335498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.335524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.335659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.335690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.335857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.335884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.336021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.336048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.336162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.336189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 
00:35:54.948 [2024-10-28 05:11:45.336303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.336331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.336473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.336501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.336667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.336695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.948 [2024-10-28 05:11:45.336841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.948 [2024-10-28 05:11:45.336870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.948 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.337015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.337042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.337207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.337233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.337374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.337400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.337521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.337549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.337701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.337728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.337862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.337889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 
00:35:54.949 [2024-10-28 05:11:45.338026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.338053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.338220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.338246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.338417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.338445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.338588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.338615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.338757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.338784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.338947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.338974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.339111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.339138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.339301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.339328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.339468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.339495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.339641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.339669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 
00:35:54.949 [2024-10-28 05:11:45.339812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.339838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.339983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.340009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.340118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.340145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.340294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.340321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.340464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.340493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.340671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.340698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.340840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.340869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.341008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.341035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.341190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.341217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.341376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.341403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 
00:35:54.949 [2024-10-28 05:11:45.341556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.341594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.341757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.341786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.341935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.341961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.342100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.342127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.342290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.342316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.342483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.342510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.342672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.342700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.342822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.342850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.342986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.343013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.343132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.343160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 
00:35:54.949 [2024-10-28 05:11:45.343309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.343335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.343483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.343512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.343631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.343663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.949 [2024-10-28 05:11:45.343804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.949 [2024-10-28 05:11:45.343831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.949 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.343946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.343973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.344088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.344114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.344223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.344250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.344393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.344419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.344570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.344611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.344770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.344799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 
00:35:54.950 [2024-10-28 05:11:45.344952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.344981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.345150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.345188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.345331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.345360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.345478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.345506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.345624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.345664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.345811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.345838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.345983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.346009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.346136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.346164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.346331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.346358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.346516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.346543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 
00:35:54.950 [2024-10-28 05:11:45.346679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.346706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.347265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.347296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.347464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.347491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.347605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.347650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.347797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.347824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.347971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.347999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.348144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.348171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.348277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.348304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.348445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.348473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.348629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.348677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 
00:35:54.950 [2024-10-28 05:11:45.348813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.348842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.348995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.349023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.349136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.349163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.349319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.349345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.349466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.349493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.349631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.349670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.349802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.349829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.349970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.349997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.350174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.350201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.350317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.350343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 
00:35:54.950 [2024-10-28 05:11:45.350475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.350501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.350646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.350674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.350814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.350840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.950 [2024-10-28 05:11:45.350990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.950 [2024-10-28 05:11:45.351016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.950 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.351135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.351162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.351306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.351343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.351519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.351546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.351700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.351727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.351840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.351867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.352013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.352040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 
00:35:54.951 [2024-10-28 05:11:45.352206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.352240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.352405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.352432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.352597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.352641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.352757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.352784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.352922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.352950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.353070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.353096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.353249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.353276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.353393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.353420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.353530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.353556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.353688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.353715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 
00:35:54.951 [2024-10-28 05:11:45.353836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.353862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.354028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.354054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.354192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.354218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.354358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.354386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.354531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.354558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.354687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.354713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.354837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.354863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.354982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.355009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.355175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.355201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.355339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.355365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 
00:35:54.951 [2024-10-28 05:11:45.355534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.355561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.355719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.355759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.355905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.355941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.356087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.356114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.356245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.356272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.356441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.356468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.356639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.356667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.356781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.356814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.356956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.356983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.357128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.357154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 
00:35:54.951 [2024-10-28 05:11:45.357279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.357308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.357456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.357482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.357644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.357671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.357805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.951 [2024-10-28 05:11:45.357832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.951 qpair failed and we were unable to recover it. 00:35:54.951 [2024-10-28 05:11:45.357950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.357976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.358116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.358142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.358282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.358308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.358452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.358478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.358595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.358623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.358774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.358802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 
00:35:54.952 [2024-10-28 05:11:45.358915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.358948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.359090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.359117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.359255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.359282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.359446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.359473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.359640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.359668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.359791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.359818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.359944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.359971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.360106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.360132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.360270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.360296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.360400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.360426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 
00:35:54.952 [2024-10-28 05:11:45.360546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.360585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.360742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.360770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.360916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.360943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.361089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.361118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.361240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.361271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.361441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.361468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.361613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.361650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.361811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.361838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.361975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.362009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.362149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.362175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 
00:35:54.952 [2024-10-28 05:11:45.362278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.362304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.362438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.362464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.362611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.362647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.362816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.362842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.362984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.363017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.363157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.363184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.363306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.363333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.363474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.363501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.363641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.363682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 00:35:54.952 [2024-10-28 05:11:45.363830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.952 [2024-10-28 05:11:45.363858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.952 qpair failed and we were unable to recover it. 
00:35:54.953 [2024-10-28 05:11:45.363972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.363999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.364147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.364175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.364321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.364347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.364515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.364542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.364692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.364721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.364862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.364888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.365032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.365058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.365196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.365222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.365361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.365387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.365529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.365555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 
00:35:54.953 [2024-10-28 05:11:45.365674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.365701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.365843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.365873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.366021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.366048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.366185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.366212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.366353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.366379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.366519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.366547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.366697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.366723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.366864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.366891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.367030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.367057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.367201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.367227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 
00:35:54.953 [2024-10-28 05:11:45.367363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.367390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.367506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.367534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.367673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.367700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.367818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.367845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.368009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.368035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.368148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.368174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.368317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.368344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.368452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.368479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.368643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.368670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.368809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.368836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 
00:35:54.953 [2024-10-28 05:11:45.368982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.369009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.369181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.369208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.369374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.369400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.369522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.369548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.369699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.369726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.369870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.369896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.370049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.370075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.370219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.370246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.370364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.370395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 00:35:54.953 [2024-10-28 05:11:45.370559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.953 [2024-10-28 05:11:45.370585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.953 qpair failed and we were unable to recover it. 
00:35:54.954 [2024-10-28 05:11:45.370763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.370790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.370925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.370952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.371075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.371102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.371219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.371246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.371411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.371437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.371576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.371602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.371782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.371809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.371951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.371977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.372139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.372165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.372305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.372331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 
00:35:54.954 [2024-10-28 05:11:45.372493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.372520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.372687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.372714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.372831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.372859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.372982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.373009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.373153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.373179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.373349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.373375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.373517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.373543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.373661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.373688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.373838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.373878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.374021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.374049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 
00:35:54.954 [2024-10-28 05:11:45.374192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.374219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.374384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.374411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.374578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.374605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.374741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.374769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.374914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.374948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.375103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.375136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.375279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.375307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.375474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.375501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.375650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.375679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.375793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.375820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 
00:35:54.954 [2024-10-28 05:11:45.375955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.375981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.376148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.376174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.376314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.376340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.376483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.376509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.376654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.376681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.376830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.376857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.376996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.377025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.377157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.377184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.377352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.377379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.954 qpair failed and we were unable to recover it. 00:35:54.954 [2024-10-28 05:11:45.377506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.954 [2024-10-28 05:11:45.377533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 
00:35:54.955 [2024-10-28 05:11:45.377677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.377705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.377847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.377874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.378058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.378085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.378223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.378252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.378375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.378401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.378554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.378581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.378746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.378787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.378955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.378981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.379122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.379148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.379290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.379316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 
00:35:54.955 [2024-10-28 05:11:45.379422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.379449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.379588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.379614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.379764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.379795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.379940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.379966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.380121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.380147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.380312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.380338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.380481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.380508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.380623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.380656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.380821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.380848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.380963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.380989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 
00:35:54.955 [2024-10-28 05:11:45.381104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.381130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.381247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.381274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.381410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.381436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.381555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.381581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.381702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.381729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.381869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.381896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.382061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.382102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.382287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.382315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.382455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.382482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.382623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.382659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 
00:35:54.955 [2024-10-28 05:11:45.382828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.382854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.382975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.383003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.383172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.383199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.383341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.383367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.383510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.383536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.383695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.383722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.383828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.383855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.384004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.384030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.384197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.384223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 00:35:54.955 [2024-10-28 05:11:45.384339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.955 [2024-10-28 05:11:45.384369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.955 qpair failed and we were unable to recover it. 
00:35:54.956 [2024-10-28 05:11:45.384485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.384512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.384655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.384682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.384820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.384847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.385016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.385042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.385183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.385209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.385361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.385387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.385531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.385557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.385712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.385753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.385908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.385936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.386110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.386137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 
00:35:54.956 [2024-10-28 05:11:45.386291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.386318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.386430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.386457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.386632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.386665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.386793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.386820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.386963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.386991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.387133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.387160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.387322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.387349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.387493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.387520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.387660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.387688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.387816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.387843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 
00:35:54.956 [2024-10-28 05:11:45.387955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.387982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.388095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.388123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.388291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.388318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.388461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.388489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.388603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.388631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.388808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.388835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.388959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.388990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.389125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.389152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.389291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.389317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.389459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.389488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 
00:35:54.956 [2024-10-28 05:11:45.389629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.389669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.389787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.389813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.389955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.389982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.390147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.390173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.390342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.390369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.390510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.390537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.390673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.390700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.390841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.390868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.391046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.391073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 00:35:54.956 [2024-10-28 05:11:45.391214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.956 [2024-10-28 05:11:45.391242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.956 qpair failed and we were unable to recover it. 
00:35:54.957 [2024-10-28 05:11:45.391365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.391392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.391532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.391559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.391707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.391735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.391882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.391908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.392077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.392104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.392238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.392265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.392402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.392429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.392559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.392585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.392705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.392732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.392894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.392921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 
00:35:54.957 [2024-10-28 05:11:45.393062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.393088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.393227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.393253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.393392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.393419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.393572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.393600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.393772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.393799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.393970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.393996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.394142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.394169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.394313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.394339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.394501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.394528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.394695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.394722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 
00:35:54.957 [2024-10-28 05:11:45.394861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.394888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.395049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.395075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.395191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.395219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.395365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.395393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.395567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.395595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.395766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.395794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.395959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.395990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.396161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.396188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.396352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.396379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.396498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.396525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 
00:35:54.957 [2024-10-28 05:11:45.396693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.396721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.396863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.396890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.397007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.397035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.397179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.397207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.957 [2024-10-28 05:11:45.397325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.957 [2024-10-28 05:11:45.397353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.957 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.397470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.397497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.397647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.397687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.397736] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:54.958 [2024-10-28 05:11:45.397838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.397865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.398009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.398036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 
00:35:54.958 [2024-10-28 05:11:45.398212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.398240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.398377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.398403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.398516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.398542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.398661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.398689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.398828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.398855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.399000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.399027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.399163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.399189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.399316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.399343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.399494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.399521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.399683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.399724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 
00:35:54.958 [2024-10-28 05:11:45.399877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.399905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.400071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.400097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.400252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.400279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.400416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.400447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.400588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.400614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.400760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.400786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.400930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.400957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.401067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.401093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.401256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.401282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.401433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.401460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 
00:35:54.958 [2024-10-28 05:11:45.401626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.401664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.401811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.401840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.401985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.402012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.402181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.402207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.402375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.402401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.402547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.402573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.402715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.402742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.402888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.402917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.403061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.403088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.403220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.403246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 
00:35:54.958 [2024-10-28 05:11:45.403386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.403412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.403553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.403580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.403730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.403759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.403927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.403956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.404074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.958 [2024-10-28 05:11:45.404102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.958 qpair failed and we were unable to recover it. 00:35:54.958 [2024-10-28 05:11:45.404269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.404295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.404457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.404482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.404593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.404621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.404795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.404822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.404986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.405013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 
00:35:54.959 [2024-10-28 05:11:45.405153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.405185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.405355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.405382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.405496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.405524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.405643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.405670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.405808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.405835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.405976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.406003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.406169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.406195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.406303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.406329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.406436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.406466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.406611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.406643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 
00:35:54.959 [2024-10-28 05:11:45.406760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.406786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.406925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.406951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.407064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.407091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.407232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.407260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.407439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.407467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.407582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.407609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.407754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.407782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.407898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.407926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.408068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.408094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.408242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.408268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 
00:35:54.959 [2024-10-28 05:11:45.408431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.408458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.408623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.408657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.408769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.408796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.408962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.408988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.409106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.409134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.409284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.409310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.409449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.409478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.409660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.409693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.409860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.409887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.410027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.410053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 
00:35:54.959 [2024-10-28 05:11:45.410217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.410243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.410378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.410405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.410581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.410622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.410779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.410807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.410926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.410952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.959 qpair failed and we were unable to recover it. 00:35:54.959 [2024-10-28 05:11:45.411066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.959 [2024-10-28 05:11:45.411093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.411212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.411239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.411363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.411390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.411539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.411567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.411684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.411712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 
00:35:54.960 [2024-10-28 05:11:45.411831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.411858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.412026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.412053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.412194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.412220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.412361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.412386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.412504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.412532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.412680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.412708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.412825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.412852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.412987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.413014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.413133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.413160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.413270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.413296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 
00:35:54.960 [2024-10-28 05:11:45.413460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.413488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.413646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.413674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.413817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.413843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.414009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.414035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.414142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.414174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.414282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.414308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.414416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.414443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.414557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.414583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.414709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.414737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.414853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.414881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 
00:35:54.960 [2024-10-28 05:11:45.415022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.415050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.415158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.415184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.415346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.415373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.415514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.415540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.415656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.415683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.415826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.415853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.415989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.416015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.416160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.416186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.416309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.416338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.416483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.416509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 
00:35:54.960 [2024-10-28 05:11:45.416644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.416672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.416822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.416849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.416987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.417013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.417163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.417189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.417305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.417332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.960 [2024-10-28 05:11:45.417458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.960 [2024-10-28 05:11:45.417484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.960 qpair failed and we were unable to recover it. 00:35:54.961 [2024-10-28 05:11:45.417654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.961 [2024-10-28 05:11:45.417681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.961 qpair failed and we were unable to recover it. 00:35:54.961 [2024-10-28 05:11:45.417819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.961 [2024-10-28 05:11:45.417845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.961 qpair failed and we were unable to recover it. 00:35:54.961 [2024-10-28 05:11:45.417980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.961 [2024-10-28 05:11:45.418006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.961 qpair failed and we were unable to recover it. 00:35:54.961 [2024-10-28 05:11:45.418172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.961 [2024-10-28 05:11:45.418199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.961 qpair failed and we were unable to recover it. 
00:35:54.961 [2024-10-28 05:11:45.418310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.961 [2024-10-28 05:11:45.418337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:54.961 qpair failed and we were unable to recover it.
00:35:54.962 [2024-10-28 05:11:45.423597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.962 [2024-10-28 05:11:45.423643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420
00:35:54.962 qpair failed and we were unable to recover it.
00:35:54.963 [2024-10-28 05:11:45.436518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:54.963 [2024-10-28 05:11:45.436585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.963 [2024-10-28 05:11:45.436613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420
00:35:54.963 qpair failed and we were unable to recover it.
00:35:54.966 [2024-10-28 05:11:45.453251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.966 [2024-10-28 05:11:45.453279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.966 qpair failed and we were unable to recover it. 00:35:54.966 [2024-10-28 05:11:45.453420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.966 [2024-10-28 05:11:45.453449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.966 qpair failed and we were unable to recover it. 00:35:54.966 [2024-10-28 05:11:45.453571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.966 [2024-10-28 05:11:45.453598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.966 qpair failed and we were unable to recover it. 00:35:54.966 [2024-10-28 05:11:45.453780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.966 [2024-10-28 05:11:45.453808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.966 qpair failed and we were unable to recover it. 00:35:54.966 [2024-10-28 05:11:45.453952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.966 [2024-10-28 05:11:45.453979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.966 qpair failed and we were unable to recover it. 00:35:54.966 [2024-10-28 05:11:45.454094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.966 [2024-10-28 05:11:45.454121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.966 qpair failed and we were unable to recover it. 00:35:54.966 [2024-10-28 05:11:45.454265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.966 [2024-10-28 05:11:45.454292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.454426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.454458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.454603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.454631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.454758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.454786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 
00:35:54.967 [2024-10-28 05:11:45.454933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.454962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.455141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.455169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.455337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.455365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.455502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.455530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.455653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.455681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.455850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.455877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.456018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.456046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.456195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.456222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.456364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.456391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.456523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.456550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 
00:35:54.967 [2024-10-28 05:11:45.456682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.456711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.456884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.456911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.457053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.457081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.457215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.457243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.457383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.457410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.457523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.457552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.457711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.457739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.457878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.457906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.458071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.458099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.458225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.458252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 
00:35:54.967 [2024-10-28 05:11:45.458396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.458424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.458615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.458665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.458802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.458831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.458974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.459001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.459144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.459172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.459296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.459325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.459495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.459522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.459674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.459702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.459822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.459850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.460016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.460043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 
00:35:54.967 [2024-10-28 05:11:45.460211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.460238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.460346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.460373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.460512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.460539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.460706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.460735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.460872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.460899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.461039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.461065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.461197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.461224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.461339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.461366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.967 qpair failed and we were unable to recover it. 00:35:54.967 [2024-10-28 05:11:45.461516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.967 [2024-10-28 05:11:45.461543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.461707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.461746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 
00:35:54.968 [2024-10-28 05:11:45.461900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.461929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.462069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.462096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.462237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.462265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.462404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.462432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.462546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.462574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.462718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.462746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.462865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.462893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.463035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.463063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.463246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.463274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.463395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.463423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 
00:35:54.968 [2024-10-28 05:11:45.463546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.463575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.463744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.463787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.463910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.463939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.464067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.464097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.464212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.464240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.464342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.464369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.464522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.464549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.464691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.464719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.464860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.464887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.465060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.465087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 
00:35:54.968 [2024-10-28 05:11:45.465223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.465250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.465416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.465443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.465560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.465587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.465714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.465742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.465870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.465902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.466084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.466112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.466230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.466256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.466402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.466429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.466572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.466599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.466755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.466783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 
00:35:54.968 [2024-10-28 05:11:45.466949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.466977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.467127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.467155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.467295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.467323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.467457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.467484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.467596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.467639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.467761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.467788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.467933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.467959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.468085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.468112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.468248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.468275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.468446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.468474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 
00:35:54.968 [2024-10-28 05:11:45.468584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.468613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.968 [2024-10-28 05:11:45.468789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.968 [2024-10-28 05:11:45.468831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.968 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.468979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.469009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.469121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.469149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.469290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.469318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.469470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.469498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.469654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.469682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.469847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.469876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.470028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.470057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.470194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.470222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 
00:35:54.969 [2024-10-28 05:11:45.470375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.470402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.470520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.470554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.470696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.470725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.470873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.470901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.471062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.471090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.471232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.471259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.471405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.471434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.471603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.471647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.471792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.471820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.471986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.472013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 
00:35:54.969 [2024-10-28 05:11:45.472128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.472155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.472295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.472323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.472490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.472518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.472679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.472706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.472840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.472868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.473023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.473050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.473216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.473243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.473407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.473435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.473576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.473603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.473762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.473790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 
00:35:54.969 [2024-10-28 05:11:45.473932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.473959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.474111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.474139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.474280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.474308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.474450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.474477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.474620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.474653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.474774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.474801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.474921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.474948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.475088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.475117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.475281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.475321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.475448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.475476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 
00:35:54.969 [2024-10-28 05:11:45.475646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.475687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.475835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.475864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.476020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.476047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.476157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.476184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.476326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.969 [2024-10-28 05:11:45.476354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.969 qpair failed and we were unable to recover it. 00:35:54.969 [2024-10-28 05:11:45.476496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.476522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.476701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.476743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.476881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.476911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.477058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.477094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.477237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.477265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 
00:35:54.970 [2024-10-28 05:11:45.477403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.477430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.477571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.477598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.477766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.477793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.477960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.477987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.478125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.478152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.478261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.478289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.478406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.478433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.478559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.478586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.478728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.478769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.478923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.478962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 
00:35:54.970 [2024-10-28 05:11:45.479133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.479161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.479300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.479328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.479468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.479495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.479613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.479646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.479757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.479786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.479912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.479950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.480090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.480117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.480284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.480312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.480422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.480449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.480560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.480587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 
00:35:54.970 [2024-10-28 05:11:45.480704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.480732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.480887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.480915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.481084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.481112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.481226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.481254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.481399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.481428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.481549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.481576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.481701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.481730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.481845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.481872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.482013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.482045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.482189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.482216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 
00:35:54.970 [2024-10-28 05:11:45.482383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.482411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.970 [2024-10-28 05:11:45.482554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.970 [2024-10-28 05:11:45.482582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.970 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.482752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.482780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.482943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.482971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.483119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.483147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.483261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.483288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.483421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.483449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.483588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.483616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.483788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.483816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.483932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.483960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 
00:35:54.971 [2024-10-28 05:11:45.484085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.484112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.484219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.484246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.484356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.484383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.484502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.484529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.484653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.484680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.484816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.484842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.484949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.484976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.485083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.485110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.485220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.485249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.485366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.485393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 
00:35:54.971 [2024-10-28 05:11:45.485518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.485558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.485652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:54.971 [2024-10-28 05:11:45.485682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.485688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:54.971 [2024-10-28 05:11:45.485706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:54.971 [2024-10-28 05:11:45.485710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.971 [2024-10-28 05:11:45.485719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.485730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:54.971 [2024-10-28 05:11:45.485853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.485880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.486021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.486054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.486221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.486252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.486401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.486429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.486534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.486562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.486716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.486744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 
00:35:54.971 [2024-10-28 05:11:45.486855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.486882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.487000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.487026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.487162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.487189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.487305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.487332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.487332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:54.971 [2024-10-28 05:11:45.487383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:54.971 [2024-10-28 05:11:45.487440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.487465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.487429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:54.971 [2024-10-28 05:11:45.487433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:54.971 [2024-10-28 05:11:45.487583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.487609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.487729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.487757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.487873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.487904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.488046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.488072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 
00:35:54.971 [2024-10-28 05:11:45.488183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.488209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.488368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.488408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.488539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.488567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.488718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.488746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.488864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.971 [2024-10-28 05:11:45.488891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.971 qpair failed and we were unable to recover it. 00:35:54.971 [2024-10-28 05:11:45.489035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.972 [2024-10-28 05:11:45.489063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.972 qpair failed and we were unable to recover it. 00:35:54.972 [2024-10-28 05:11:45.489178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.972 [2024-10-28 05:11:45.489206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:54.972 qpair failed and we were unable to recover it. 00:35:54.972 [2024-10-28 05:11:45.489348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.972 [2024-10-28 05:11:45.489375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.972 qpair failed and we were unable to recover it. 00:35:54.972 [2024-10-28 05:11:45.489512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.972 [2024-10-28 05:11:45.489538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.972 qpair failed and we were unable to recover it. 00:35:54.972 [2024-10-28 05:11:45.489653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.972 [2024-10-28 05:11:45.489680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.972 qpair failed and we were unable to recover it. 
00:35:54.972 [2024-10-28 05:11:45.489806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.972 [2024-10-28 05:11:45.489833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.972 qpair failed and we were unable to recover it. 00:35:54.972 [2024-10-28 05:11:45.489948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.972 [2024-10-28 05:11:45.489974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.972 qpair failed and we were unable to recover it. 00:35:54.972 [2024-10-28 05:11:45.490089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.972 [2024-10-28 05:11:45.490122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.972 qpair failed and we were unable to recover it. 00:35:54.972 [2024-10-28 05:11:45.490263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.972 [2024-10-28 05:11:45.490290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:54.972 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.490403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.490441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.490563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.490590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.490705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.490734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.490861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.490902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.491022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.491051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.491165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.491192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 
00:35:55.253 [2024-10-28 05:11:45.491314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.491341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.491466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.491492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.491609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.491641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.491762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.491791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.491936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.491964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.492079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.492111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.492229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.492257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.492403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.492430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.492546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.492574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.492696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.492725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 
00:35:55.253 [2024-10-28 05:11:45.492854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.492881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.493025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.493052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.493163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.493189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.253 [2024-10-28 05:11:45.493349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.253 [2024-10-28 05:11:45.493375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.253 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.493480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.493507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.493649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.493689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.493858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.493886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.493999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.494029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.494170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.494197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.494314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.494341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 
00:35:55.254 [2024-10-28 05:11:45.494456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.494484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.494628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.494661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.494776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.494804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.494922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.494948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.495054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.495081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.495226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.495254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.495365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.495392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.495495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.495522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.495665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.495693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.495838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.495864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 
00:35:55.254 [2024-10-28 05:11:45.495999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.496026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.496168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.496195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.496323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.496350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.496479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.496519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.496653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.496683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.496793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.496820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.496930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.496956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.497101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.497127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.497234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.497270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.497381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.497409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 
00:35:55.254 [2024-10-28 05:11:45.497526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.497552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.497667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.497695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.497803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.497829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.497972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.497998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.498107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.498133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.498282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.498311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.498479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.498506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.498643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.498671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.498779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.498806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.498943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.498969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 
00:35:55.254 [2024-10-28 05:11:45.499081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.499108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.499249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.499278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.254 [2024-10-28 05:11:45.499435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.254 [2024-10-28 05:11:45.499462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.254 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.499597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.499625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.499755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.499783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.499920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.499947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.500094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.500121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.500236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.500264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.500382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.500409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.500527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.500554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 
00:35:55.255 [2024-10-28 05:11:45.500688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.500715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.500826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.500853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.500990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.501017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.501133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.501162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.501295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.501323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.501481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.501522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.501651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.501680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.501817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.501843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.501961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.501988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.502112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.502140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 
00:35:55.255 [2024-10-28 05:11:45.502263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.502290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.502401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.502430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.502568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.502601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.502740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.502767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.502910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.502936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.503049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.503076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.503222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.503248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.503354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.503381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.503524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.503553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.503677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.503704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 
00:35:55.255 [2024-10-28 05:11:45.503814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.503842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.503982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.504010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.504116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.504143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.504287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.504314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.504431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.504459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.504560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.504587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.504708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.504736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.504849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.504876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.505000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.505026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.505185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.505212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 
00:35:55.255 [2024-10-28 05:11:45.505334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.505360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.505465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.505492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.505639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.255 [2024-10-28 05:11:45.505668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.255 qpair failed and we were unable to recover it. 00:35:55.255 [2024-10-28 05:11:45.505838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.505866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.505979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.506007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.506126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.506153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.506264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.506290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.506402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.506430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.506545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.506572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.506701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.506741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 
00:35:55.256 [2024-10-28 05:11:45.506914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.506942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.507104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.507131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.507234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.507261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.507377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.507403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.507568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.507594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.507715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.507743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.507898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.507926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.508041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.508067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.508172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.508198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.508299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.508326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 
00:35:55.256 [2024-10-28 05:11:45.508460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.508486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.508623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.508658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.508776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.508804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.508917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.508944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.509091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.509118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.509241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.509268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.509418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.509445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.509581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.509621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.509744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.509773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.509910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.509937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 
00:35:55.256 [2024-10-28 05:11:45.510055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.510081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.510205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.510232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.510348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.510374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.510511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.510538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.510644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.510672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.510816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.510842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.510966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.511006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.511150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.511179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.511288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.511316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.511433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.511461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 
00:35:55.256 [2024-10-28 05:11:45.511572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.511600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.511725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.511753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.511877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.256 [2024-10-28 05:11:45.511906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.256 qpair failed and we were unable to recover it. 00:35:55.256 [2024-10-28 05:11:45.512046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.512073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.512181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.512208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.512340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.512367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.512481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.512507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.512621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.512661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.512809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.512835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.512950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.512976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 
00:35:55.257 [2024-10-28 05:11:45.513148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.513174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.513312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.513341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.513460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.513487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.513627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.513661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.513800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.513828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.513940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.513967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.514132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.514159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.514268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.514295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.514411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.514437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.514574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.514600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 
00:35:55.257 [2024-10-28 05:11:45.514714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.514741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.514852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.514878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.514989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.515016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.515127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.515156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.515275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.515302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.515440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.515467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.515589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.515616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.515733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.515760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.515901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.515928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.516045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.516072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 
00:35:55.257 [2024-10-28 05:11:45.516186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.516215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.516326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.516353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.516472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.516499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.516645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.516674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.516787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.516814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.516965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.516994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.517106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.517137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.257 qpair failed and we were unable to recover it. 00:35:55.257 [2024-10-28 05:11:45.517274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.257 [2024-10-28 05:11:45.517301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.517420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.517446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.517584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.517611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 
00:35:55.258 [2024-10-28 05:11:45.517735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.517762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.517878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.517905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.518040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.518066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.518167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.518193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.518310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.518336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.518483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.518509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.518674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.518701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.518880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.518906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.519049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.519075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.519189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.519215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 
00:35:55.258 [2024-10-28 05:11:45.519358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.519388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.519526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.519553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.519674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.519702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.519853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.519881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.519991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.520017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.520157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.520184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.520294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.520322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.520440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.520466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.520575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.520602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.520753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.520779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 
00:35:55.258 [2024-10-28 05:11:45.520893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.520919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.521033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.521060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.521207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.521233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.521347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.521378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.521496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.521522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.521656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.521699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.521873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.521902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.522042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.522070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.522225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.522252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.522385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.522411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 
00:35:55.258 [2024-10-28 05:11:45.522555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.522582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.522693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.522721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.522838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.522864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.522974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.523001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.523142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.523169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.523311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.523338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.523444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.523471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.258 [2024-10-28 05:11:45.523580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.258 [2024-10-28 05:11:45.523606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.258 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.523734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.523762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.523905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.523932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 
00:35:55.259 [2024-10-28 05:11:45.524078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.524104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.524218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.524245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.524360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.524386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.524499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.524525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.524672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.524699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.524810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.524837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.524945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.524972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.525090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.525118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.525260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.525287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.525394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.525421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 
00:35:55.259 [2024-10-28 05:11:45.525574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.525604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.525751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.525779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.525897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.525923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.526049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.526076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.526190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.526216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.526328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.526355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.526456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.526483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.526603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.526629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.526769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.526795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.526903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.526929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 
00:35:55.259 [2024-10-28 05:11:45.527040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.527066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.527206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.527232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.527352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.527378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.527495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.527522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.527653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.527694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.527888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.527917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.528041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.528068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.528213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.528240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.528375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.528401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.528515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.528542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 
00:35:55.259 [2024-10-28 05:11:45.528658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.528687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.528800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.528827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.528942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.528969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.529087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.529113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.529245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.529272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.529386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.529412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.529557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.529583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.529717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.259 [2024-10-28 05:11:45.529744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.259 qpair failed and we were unable to recover it. 00:35:55.259 [2024-10-28 05:11:45.529857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.529884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.530041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.530067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 
00:35:55.260 [2024-10-28 05:11:45.530177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.530204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.530350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.530376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.530520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.530547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.530687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.530714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.530829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.530855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.531011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.531052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.531176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.531205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.531319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.531346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.531511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.531538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.531668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.531711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 
00:35:55.260 [2024-10-28 05:11:45.531832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.531861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.532007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.532035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.532156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.532182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.532325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.532352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.532485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.532514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.532624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.532658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.532801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.532828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.532990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.533016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.533151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.533178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.533296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.533323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 
00:35:55.260 [2024-10-28 05:11:45.533438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.533464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.533584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.533624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.533778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.533807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.533917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.533945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.534060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.534094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.534200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.534227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.534370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.534398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.534510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.534538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.534661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.534688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.534797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.534823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 
00:35:55.260 [2024-10-28 05:11:45.534925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.534951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.535117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.535143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.535264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.535291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.535408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.535437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.535571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.535612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.535770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.535798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.535908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.535935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.536046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.260 [2024-10-28 05:11:45.536073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.260 qpair failed and we were unable to recover it. 00:35:55.260 [2024-10-28 05:11:45.536246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.536272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.536382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.536411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 
00:35:55.261 [2024-10-28 05:11:45.536570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.536610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.536767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.536796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.536931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.536959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.537080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.537107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.537242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.537270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.537406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.537433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.537580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.537606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.537724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.537751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.537894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.537922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.538053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.538093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 
00:35:55.261 [2024-10-28 05:11:45.538207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.538235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.538407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.538439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.538549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.538575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.538686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.538714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.538816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.538842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.538956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.538984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.539106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.539133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.539253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.539283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.539393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.539421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.539525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.539552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 
00:35:55.261 [2024-10-28 05:11:45.539668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.539696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.539833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.539859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.539967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.539993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.540127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.540154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.540266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.540294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.540438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.540465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.540614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.540648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.540768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.540795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.540957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.540983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.541102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.541130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 
00:35:55.261 [2024-10-28 05:11:45.541239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.541266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.541417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.541457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.541609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.541645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.541784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.541811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.541927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.541954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.542096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.542124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.542246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.542286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.542430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-10-28 05:11:45.542457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-10-28 05:11:45.542581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.542621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.542754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.542782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 
00:35:55.262 [2024-10-28 05:11:45.542889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.542916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.543021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.543049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.543164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.543191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.543294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.543321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.543436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.543463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.543565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.543594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.543727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.543757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.543903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.543932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.544033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.544060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.544224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.544252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 
00:35:55.262 [2024-10-28 05:11:45.544368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.544396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.544537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.544565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.544687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.544716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.544833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.544860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.545000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.545026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.545133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.545160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.545278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.545318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.545463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.545492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.545653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.545693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.545811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.545839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 
00:35:55.262 [2024-10-28 05:11:45.545957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.545983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.546085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.546112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.546249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.546278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.546391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.546419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.546569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.546611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.546735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.546763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.546900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.546927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.547090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.547116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.547233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.547259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.547368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.547394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 
00:35:55.262 [2024-10-28 05:11:45.547545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.547571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.547716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.547743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.547856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.547882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-10-28 05:11:45.547991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-10-28 05:11:45.548018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.548133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.548160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.548275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.548302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.548452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.548481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.548599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.548646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.548813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.548859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.548984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.549012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 
00:35:55.263 [2024-10-28 05:11:45.549157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.549183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.549302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.549329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.549443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.549469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.549603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.549653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.549819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.549847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.550019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.550047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.550190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.550218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.550361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.550388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.550525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.550552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.550688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.550715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 
00:35:55.263 [2024-10-28 05:11:45.550860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.550887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.551001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.551032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.551176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.551203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.551323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.551349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.551463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.551490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.551644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.551673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.551782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.551808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.551921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.551947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.552103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.552144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.552259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.552287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 
00:35:55.263 [2024-10-28 05:11:45.552400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.552430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.552568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.552595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.552749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.552789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.552923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.552953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.553061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.553089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.553203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.553234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.553371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.553397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.553499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.553525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.553642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.553669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.553780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.553807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 
00:35:55.263 [2024-10-28 05:11:45.553913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.553940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.554063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.554091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.554206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.554236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-10-28 05:11:45.554379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-10-28 05:11:45.554405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.554524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.554551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.554660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.554687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.554829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.554855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.554978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.555018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.555170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.555197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.555341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.555368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 
00:35:55.264 [2024-10-28 05:11:45.555497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.555523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.555638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.555665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.555808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.555833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.555980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.556005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.556167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.556194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.556328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.556353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.556458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.556487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.556626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.556659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.556787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.556814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.556959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.556987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 
00:35:55.264 [2024-10-28 05:11:45.557106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.557133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.557274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.557301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.557420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.557448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.557606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.557655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.557816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.557844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.558002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.558029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.558140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.558168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.558289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.558316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.558439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.558467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.558576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.558602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 
00:35:55.264 [2024-10-28 05:11:45.558757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.558784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.558894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.558921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.559058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.559084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.559222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.559248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.559362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.559391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.559502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.559529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.559679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.559707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.559845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.559872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.560013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.560040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.560180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.560207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 
00:35:55.264 [2024-10-28 05:11:45.560312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.560339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.560467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.560495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.560645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.560672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.264 [2024-10-28 05:11:45.560783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.264 [2024-10-28 05:11:45.560809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.264 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.560915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.560941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.561080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.561106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.561248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.561275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.561410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.561437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.561543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.561569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.561691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.561722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 
00:35:55.265 [2024-10-28 05:11:45.561833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.561861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.561981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.562007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.562113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.562139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.562276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.562303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.562430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.562471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.562616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.562650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.562765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.562791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.562905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.562932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.563067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.563093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.563228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.563254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 
00:35:55.265 [2024-10-28 05:11:45.563388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.563413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.563603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.563642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.563769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.563804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.563913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.563940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.564054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.564081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.564217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.564245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.564382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.564409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.564527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.564554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.564709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.564749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.564876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.564904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 
00:35:55.265 [2024-10-28 05:11:45.565047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.565074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.565184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.565211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.565321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.565348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.565467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.565493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.565632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.565667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.565780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.565810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.565938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.565966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.566102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.566129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.566243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.566269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.566490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.566517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 
00:35:55.265 [2024-10-28 05:11:45.566644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.566673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.566820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.566846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.566987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.567017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.567159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.265 [2024-10-28 05:11:45.567187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.265 qpair failed and we were unable to recover it. 00:35:55.265 [2024-10-28 05:11:45.567297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.567323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.567449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.567476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.567589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.567616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.567737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.567764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.567873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.567900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.568016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.568043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 
00:35:55.266 [2024-10-28 05:11:45.568159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.568185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.568308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.568334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.568449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.568479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.568626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.568667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.568777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.568805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.568916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.568943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.569069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.569095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.569205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.569233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.569343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.569370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.569479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.569506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 
00:35:55.266 [2024-10-28 05:11:45.569612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.569646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.569753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.569779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.569890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.569916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.570039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.570065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.570173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.570201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.570314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.570341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.570482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.570509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.570651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.570678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.570816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.570843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.570957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.570983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 
00:35:55.266 [2024-10-28 05:11:45.571099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.571126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.571229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.571255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.571373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.571414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.571534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.571562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.571703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.571731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.571842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.571869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.571991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.572019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.572151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.572191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.572312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.572340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-10-28 05:11:45.572463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-10-28 05:11:45.572492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 
00:35:55.267 [2024-10-28 05:11:45.572617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.572648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.572792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.572818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.572927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.572953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.573089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.573116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.573257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.573284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.573426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.573452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.573591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.573620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.573785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.573825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.573941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.573969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.574088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.574131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 
00:35:55.267 [2024-10-28 05:11:45.574246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.574272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.574418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.574446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.574587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.574615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.574765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.574793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.574937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.574977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.575123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.575150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.575263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.575288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.575400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.575427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.575567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.575593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.575719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.575745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 
00:35:55.267 [2024-10-28 05:11:45.575854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.575880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.576015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.576041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.576171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.576197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.576343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.576369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.576479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.576506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.576628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.576675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.576799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.576828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.576966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.576995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.577111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.577138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.577275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.577302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 
00:35:55.267 [2024-10-28 05:11:45.577417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.577444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.577609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.577642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.577782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.577807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.577959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.577984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.578129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.578154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.578266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.578292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.578434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.578461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.578645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.578673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.578785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.578811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-10-28 05:11:45.578920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-10-28 05:11:45.578946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 
00:35:55.268 [2024-10-28 05:11:45.579058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.579084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.579204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.579231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.579375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.579401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.579533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.579558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.579699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.579726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.579842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.579868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.579975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.580001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.580141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.580166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.580280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.580305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.580418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.580447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 
00:35:55.268 [2024-10-28 05:11:45.580575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.580602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.580750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.580777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.580894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.580919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.581038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.581064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.581168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.581193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.581339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.581365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.581476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.581501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.581644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.581670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.581785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.581811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.581925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.581952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 
00:35:55.268 [2024-10-28 05:11:45.582074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.582114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.582236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.582265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.582382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.582409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.582563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.582591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.582769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.582797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.582900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.582926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.583038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.583065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.583170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.583197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.583302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.583329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.583472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.583500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 
00:35:55.268 [2024-10-28 05:11:45.583649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.583676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.583818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.583843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.584009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.584035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.584148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.584174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.584317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.584342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.584479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.584505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.584614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.584653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.584776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.584801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.584945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.584971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-10-28 05:11:45.585113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-10-28 05:11:45.585139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 
00:35:55.269 [2024-10-28 05:11:45.585280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.585306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.585423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.585449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.585565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.585590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.585717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.585743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.585856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.585881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.586019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.586045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.586168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.586195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.586331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.586356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.586471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.586497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.586669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.586711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 
00:35:55.269 [2024-10-28 05:11:45.586827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.586852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.586968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.586993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.587098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.587123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.587261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.587288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.587392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.587418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.587561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.587587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.587699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.587726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.587868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.587894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.588035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.588061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.588203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.588229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 
00:35:55.269 [2024-10-28 05:11:45.588332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.588359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.588473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.588498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.588612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.588645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.588772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.588799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.588902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.588928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.589067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.589094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.589235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.589261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.589405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.589432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.589587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.589626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.589780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.589808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 
00:35:55.269 [2024-10-28 05:11:45.589924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.589951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.590064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.590091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.590201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.590228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.590375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.590403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.590506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.590532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.590675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.590702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.590813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.590845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.590951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.590978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.591113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.591140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-10-28 05:11:45.591240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-10-28 05:11:45.591267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 
00:35:55.269 [2024-10-28 05:11:45.591378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.591406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.591522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.591548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.591686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.591712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.591831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.591857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.592002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.592028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.592136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.592163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.592266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.592291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.592433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.592459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.592567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.592592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.592735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.592761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 
00:35:55.270 [2024-10-28 05:11:45.592875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.592901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.593017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.593044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.593149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.593175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.593277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.593303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.593455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.593480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.593596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.593624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.593768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.593795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.593936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.593963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.594071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.594100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.594209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.594235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 
00:35:55.270 [2024-10-28 05:11:45.594344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.594371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.594511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.594537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.594641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.594668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.594787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.594819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.594959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.594986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.595096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.595122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.595260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.595287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.595419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.595446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.595561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.595588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.595743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.595771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 
00:35:55.270 [2024-10-28 05:11:45.595879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.595907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.596013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.596039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.596145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.596172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.596285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.596312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.596412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.596438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.270 qpair failed and we were unable to recover it. 00:35:55.270 [2024-10-28 05:11:45.596559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.270 [2024-10-28 05:11:45.596585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.596736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.596764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.596877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.596902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.597048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.597074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.597181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.597207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 
00:35:55.271 [2024-10-28 05:11:45.597324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.597349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.597471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.597497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.597605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.597631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.597786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.597811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.597948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.597974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.598079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.598105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.598225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.598250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.598353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.598379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.598495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.598520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.598668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.598694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 
00:35:55.271 [2024-10-28 05:11:45.598805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.598836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.598946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.598972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.599110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.599137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.599270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.599295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.599404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.599433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.599586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.599613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.599752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.599779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.599920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.599947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.600065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.600092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.600235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.600262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 
00:35:55.271 [2024-10-28 05:11:45.600375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.600401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.600535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.600562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.600686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.600713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.600828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.600854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.600994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.601022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:55.271 [2024-10-28 05:11:45.601131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 [2024-10-28 05:11:45.601160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:55.271 [2024-10-28 05:11:45.601301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.601329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:55.271 [2024-10-28 05:11:45.601472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.601500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:55.271 [2024-10-28 05:11:45.601641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.601668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.601774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.601800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.601952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.601977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.602092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.602118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.602261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.271 [2024-10-28 05:11:45.602286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.271 qpair failed and we were unable to recover it. 00:35:55.271 [2024-10-28 05:11:45.602403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.602429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.602541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.602567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.602688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.602716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.602841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.602868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.603025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.603053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 
00:35:55.272 [2024-10-28 05:11:45.603197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.603225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.603332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.603358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.603475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.603500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.603664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.603691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.603830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.603857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.603968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.603994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.604102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.604128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.604232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.604258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.604374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.604399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.604518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.604544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 
00:35:55.272 [2024-10-28 05:11:45.604678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.604709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.604841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.604867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.605013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.605038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.605203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.605228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.605340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.605367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.605483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.605509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.605618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.605650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.605818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.605844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.605963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.605988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.606142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.606168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 
00:35:55.272 [2024-10-28 05:11:45.606288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.606315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.606458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.606485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.606589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.606616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.606748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.606774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.606893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.606933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.607097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.607123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.607263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.607289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.607423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.607450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.607589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.607614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.607747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.607774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 
00:35:55.272 [2024-10-28 05:11:45.607881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.607907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.608047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.608073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.608174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.608211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.608351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.608377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.608488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.272 [2024-10-28 05:11:45.608514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.272 qpair failed and we were unable to recover it. 00:35:55.272 [2024-10-28 05:11:45.608616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.608649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.608770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.608798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.608924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.608965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.609131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.609162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.609335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.609369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-10-28 05:11:45.609496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.609524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.609669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.609699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.609834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.609861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.609986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.610018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.610165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.610192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.610312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.610340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.610504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.610531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.610645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.610673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.610777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.610804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.610916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.610951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-10-28 05:11:45.611117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.611150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.611265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.611292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.611398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.611426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.611537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.611563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.611717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.611744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.611856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.611882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.611997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.612023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.612156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.612182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.612296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.612323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.612426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.612452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-10-28 05:11:45.612594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.612648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.612791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.612819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.612937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.612964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.613078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.613106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.613285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.613312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.613463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.613490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.613604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.613651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.613791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.613818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.613926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.613954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.614074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.614101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-10-28 05:11:45.614214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.614242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.614355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.614381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.614522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.614548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.614679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.614707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.614826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-10-28 05:11:45.614855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-10-28 05:11:45.614966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.614995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.615126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.615152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.615272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.615317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.615423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.615451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.615563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.615590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-10-28 05:11:45.615747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.615774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.615909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.615940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.616062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.616096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.616214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.616241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.616349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.616375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.616507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.616534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.616697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.616724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.616836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.616864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.616972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.616999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.617111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.617138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-10-28 05:11:45.617291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.617320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.617438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.617466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.617586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.617613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.617739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.617766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.617870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.617896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.618042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.618069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.618183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.618210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.618323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.618360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.618489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.618516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.618619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.618653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-10-28 05:11:45.618764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.618791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.618906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.618933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.619054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.619081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.619199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.619226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:55.274 [2024-10-28 05:11:45.619343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.619371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:55.274 [2024-10-28 05:11:45.619488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.619515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.619668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.619696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.619808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.619834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it.
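The repeated "errno = 111" in the entries above is Linux ECONNREFUSED: while the target side of this disconnect test has no listener on 10.0.0.2 port 4420, every reconnect attempt made by nvme_tcp_qpair_connect_sock is refused and the qpair cannot recover until the listener returns. A minimal stand-alone sketch of that failure mode follows; it is not part of the test scripts, only an illustration using the address and port taken from the log, and it assumes a reachable host with nothing listening on the port (otherwise connect() may time out instead of being refused).

/* sketch: reproduce a "connect() failed, errno = 111" (ECONNREFUSED) report */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);          /* plain TCP socket */
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                        /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);     /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* with no listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Compiled and run against a host with nothing bound to 4420, this prints the same errno 111 line that repeats throughout the entries above and below.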
00:35:55.274 [2024-10-28 05:11:45.619965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.619992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.620093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.620120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.620235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.620261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.620406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.620432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.620539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.620565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-10-28 05:11:45.620692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-10-28 05:11:45.620720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.620840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.620867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.621013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.621039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.621160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.621187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.621309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.621336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-10-28 05:11:45.621465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.621505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.621640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.621670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.621814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.621841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.621992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.622018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.622127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.622153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.622302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.622342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.622488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.622515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.622616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.622648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.622757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.622783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.622899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.622926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-10-28 05:11:45.623085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.623111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.623219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.623249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.623360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.623388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.623528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.623555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.623717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.623758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.623879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.623909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.624021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.624047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.624178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.624204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.624340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.624366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.624474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.624501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-10-28 05:11:45.624614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.624652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.624769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.624800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.624908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.624935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.625075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.625102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.625248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.625275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.625420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.625447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.625587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.625615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.625761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.625788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.625903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.625929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.626037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.626064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-10-28 05:11:45.626179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.626205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-10-28 05:11:45.626346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-10-28 05:11:45.626372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.626511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.626539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.626700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.626740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.626865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.626894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.627031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.627058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.627200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.627227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.627342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.627368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.627483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.627511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.627664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.627705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 
00:35:55.276 [2024-10-28 05:11:45.627841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.627883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.628010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.628039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.628180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.628207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.628327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.628355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.628521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.628550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.628669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.628698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.628816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.628846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.628986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.629013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.629132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.629160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.629266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.629293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 
00:35:55.276 [2024-10-28 05:11:45.629435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.629461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.629578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.629612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.629796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.629822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.629966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.629995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.630147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.630175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.630290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.630317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.630483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.630511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.630628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.630667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.630813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.630841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.630979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.631007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 
00:35:55.276 [2024-10-28 05:11:45.631111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.631138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.631281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.631310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.631417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.631443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.631581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.631607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.631752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.631792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.631915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.631951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.632091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.632118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.632263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.632290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.632447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.632474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.632586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.632613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 
00:35:55.276 [2024-10-28 05:11:45.632747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.632774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-10-28 05:11:45.632913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-10-28 05:11:45.632949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.633094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.633120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.633232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.633261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.633382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.633411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.633559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.633600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.633739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.633766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.633900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.633938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.634055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.634086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.634233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.634261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 
00:35:55.277 [2024-10-28 05:11:45.634369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.634395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.634560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.634586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.634709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.634736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.634879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.634905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.635018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.635044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.635210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.635236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.635346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.635372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.635473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.635500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.635612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.635649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.635784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.635824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 
00:35:55.277 [2024-10-28 05:11:45.635982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.636023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.636165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.636193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.636311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.636337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.636453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.636480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.636589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.636616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.636733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.636760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.636902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.636928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.637064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.637091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.637232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.637259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.637390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.637417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 
00:35:55.277 [2024-10-28 05:11:45.637552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.637579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.637708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.637748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.637878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.637918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.638079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.638107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.638240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.638267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.638391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.638423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.638535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.638561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.638702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.638731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.638844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.638871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.639021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.639048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 
00:35:55.277 [2024-10-28 05:11:45.639157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.639183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.639319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.277 [2024-10-28 05:11:45.639346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.277 qpair failed and we were unable to recover it. 00:35:55.277 [2024-10-28 05:11:45.639462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.639502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.639654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.639685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.639793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.639820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.639944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.639970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.640102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.640128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.640266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.640293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.640428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.640454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.640563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.640589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 
00:35:55.278 [2024-10-28 05:11:45.640719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.640747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.640970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.640997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.641177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.641204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.641309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.641336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.641498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.641525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.641746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.641774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.641932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.641971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.642118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.642146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.642261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.642288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.642455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.642481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 
00:35:55.278 [2024-10-28 05:11:45.642600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.642646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.642796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.642823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.642946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.642974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.643107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.643134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.643274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.643301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.643418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.643446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.643599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.643653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.643805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.643833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.643948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.643976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.644092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.644121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 
00:35:55.278 [2024-10-28 05:11:45.644239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.644266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.644379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.644406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.644548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.644576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.644734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.644773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.644914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.644945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.645085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.645116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.645228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.645255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.645368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.645395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.645528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.645554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.645684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.645713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 
00:35:55.278 [2024-10-28 05:11:45.645861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.645900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.646050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.278 [2024-10-28 05:11:45.646078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.278 qpair failed and we were unable to recover it. 00:35:55.278 [2024-10-28 05:11:45.646195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.646222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.646331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.646359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.646497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.646524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.646667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.646694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.646839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.646868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.646977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.647004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.647112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.647139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.647261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.647289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 
00:35:55.279 [2024-10-28 05:11:45.647408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.647448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.647572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.647601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.647735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.647764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.647884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.647912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.648061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.648088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.648198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.648225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.648367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.648395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.648526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.648565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.648691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.648719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.648857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.648885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 
00:35:55.279 [2024-10-28 05:11:45.649117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.649145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.649265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.649292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.649462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.649494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.649620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.649656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.649772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.649799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.649930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.649959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.650079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.650106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.650221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.650246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.650430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.650456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.650563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.650588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 
00:35:55.279 [2024-10-28 05:11:45.650765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.650806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.650938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.650965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.651074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.651101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.651238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.651265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.651392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.651418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.651536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.651562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.651714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.651755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-10-28 05:11:45.651877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-10-28 05:11:45.651905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.652034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.652060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.652179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.652206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-10-28 05:11:45.652311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.652336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.652442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.652469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.652585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.652610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.652750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.652776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.652888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.652914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.653089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.653115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.653260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.653287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.653392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.653417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.653566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.653593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.653767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.653809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-10-28 05:11:45.653950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.653989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.654115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.654144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.654289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.654317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.654486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.654514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.654622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.654655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.654764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.654791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.654900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.654938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.655056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.655083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.655198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.655225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.655352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.655379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-10-28 05:11:45.655504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.655544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.655712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.655741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.655883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.655919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.656064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.656090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.656257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.656282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.656441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.656467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.656644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.656674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.656834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.656861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.656990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.657018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.657160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.657188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-10-28 05:11:45.657328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.657354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.657494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.657521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.657659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.657687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.657806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.657833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.658001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.658041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.658154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.658182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.658313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.658340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-10-28 05:11:45.658464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-10-28 05:11:45.658492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.658595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.658641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.658758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.658785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-10-28 05:11:45.658901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.658939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.659058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.659086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.659203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.659230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.659339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.659366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.659499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.659539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.659690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.659731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.659855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.659885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.660008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.660036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.660150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.660177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.660328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.660355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-10-28 05:11:45.660466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.660494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.660644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.660672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 Malloc0 00:35:55.281 [2024-10-28 05:11:45.660792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.660819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.660968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.660997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.281 [2024-10-28 05:11:45.661154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.661181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:55.281 [2024-10-28 05:11:45.661293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.661320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.281 [2024-10-28 05:11:45.661486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.661513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.661625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.661660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-10-28 05:11:45.661776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.661803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.661916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.661944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.662063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.662091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.662206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.662233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.662349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.662377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.662513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.662540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.662654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.662681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.662792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.662818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.662973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.663000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.663120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.663148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-10-28 05:11:45.663286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.663314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.663453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.663480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.663621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.663654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.663765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.663792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.663905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.663935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.664047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.664074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.664220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.664248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-10-28 05:11:45.664367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-10-28 05:11:45.664393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.664433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:55.282 [2024-10-28 05:11:45.664506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.664533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 
00:35:55.282 [2024-10-28 05:11:45.664696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.664724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.664837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.664864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.665017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.665044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.665162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.665190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.665352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.665378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.665516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.665543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.665688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.665716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.665831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.665858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.666000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.666027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.666181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.666208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 
00:35:55.282 [2024-10-28 05:11:45.666368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.666408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.666552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.666580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.666714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.666742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.666853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.666879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.667034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.667061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.667166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.667193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.667360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.667386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.667490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.667517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.667676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.667717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.667830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.667857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 
00:35:55.282 [2024-10-28 05:11:45.668004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.668030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.668147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.668173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.668284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.668314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.668464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.668513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.668645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.668674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.668821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.668848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.668975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.669001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.669107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.669134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.669248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.669275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.669417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.669445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 
00:35:55.282 [2024-10-28 05:11:45.669571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.669611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.669741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.669770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.669896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.669933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.670046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.670073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.670218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.670245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.670387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.670413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.670531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.670572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.670710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.282 [2024-10-28 05:11:45.670739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-10-28 05:11:45.670892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.670932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.671056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.671084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-10-28 05:11:45.671188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.671215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.671329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.671356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.671467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.671494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.671614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.671662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.671788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.671818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.671932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.671960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.672077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.672105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.672215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.672243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.672349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.672376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.672511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.672538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:55.283 [2024-10-28 05:11:45.672676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-10-28 05:11:45.672716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:55.283 [2024-10-28 05:11:45.672850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-10-28 05:11:45.672877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:55.283 [2024-10-28 05:11:45.672994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-10-28 05:11:45.673021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:55.283 [2024-10-28 05:11:45.673129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-10-28 05:11:45.673155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-10-28 05:11:45.673265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-10-28 05:11:45.673293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-10-28 05:11:45.673413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-10-28 05:11:45.673442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-10-28 05:11:45.673560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-10-28 05:11:45.673588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-10-28 05:11:45.673708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-10-28 05:11:45.673735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-10-28 05:11:45.673844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.673871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.673984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.674011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.674121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.674148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.674303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.674331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.674453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.674480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.674658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.674698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.674826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.674855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.674971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.675008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.675145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.675172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.675285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.675313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-10-28 05:11:45.675448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.675476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.675585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.675612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.675742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.675770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.675901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-10-28 05:11:45.675941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-10-28 05:11:45.676083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.676111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.676214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.676242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.676361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.676388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.676532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.676561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.676704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.676733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.676859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.676900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-10-28 05:11:45.677040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.677067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.677237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.677264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.677380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.677407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.677514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.677542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.677691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.677720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.677829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.677857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.678005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.678031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.678165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.678190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.678302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.678329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.678472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.678500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-10-28 05:11:45.678610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.678643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.678760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.678788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.678948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.678974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.679105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.679132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.679274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.679301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.679434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.679472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.679577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.679605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.679729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.679757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.679878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.679906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.680020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.680045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-10-28 05:11:45.680154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.284 [2024-10-28 05:11:45.680179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:35:55.284 qpair failed and we were unable to recover it.
00:35:55.284 [2024-10-28 05:11:45.680343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.284 [2024-10-28 05:11:45.680369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:35:55.284 qpair failed and we were unable to recover it.
00:35:55.284 [2024-10-28 05:11:45.680495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.284 [2024-10-28 05:11:45.680536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420
00:35:55.284 qpair failed and we were unable to recover it.
00:35:55.284 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:55.284 [2024-10-28 05:11:45.680675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.284 [2024-10-28 05:11:45.680719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.284 qpair failed and we were unable to recover it.
00:35:55.284 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:55.284 [2024-10-28 05:11:45.680836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.284 [2024-10-28 05:11:45.680865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.284 qpair failed and we were unable to recover it.
00:35:55.284 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:55.284 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:55.284 [2024-10-28 05:11:45.681006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.284 [2024-10-28 05:11:45.681034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.284 qpair failed and we were unable to recover it.
00:35:55.284 [2024-10-28 05:11:45.681154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.284 [2024-10-28 05:11:45.681180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.284 qpair failed and we were unable to recover it.
00:35:55.284 [2024-10-28 05:11:45.681298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.284 [2024-10-28 05:11:45.681327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:55.284 qpair failed and we were unable to recover it.
00:35:55.284 [2024-10-28 05:11:45.681439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.681468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.681598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.681646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.681768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.681798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.681902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-10-28 05:11:45.681930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-10-28 05:11:45.682042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.682068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.682187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.682214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.682326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.682354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.682502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.682547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.682713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.682754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.682885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.682915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 
00:35:55.285 [2024-10-28 05:11:45.683062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.683090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.683200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.683227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.683338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.683364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.683495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.683536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.683660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.683691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.683837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.683865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.683972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.684000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.684105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.684132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.684266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.684293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.684412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.684440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 
00:35:55.285 [2024-10-28 05:11:45.684559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.684585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.684706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.684733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.684846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.684873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.684980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.685007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.685137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.685163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.685281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.685308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.685426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.685466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.685590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.685618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.685749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.685775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.685893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.685919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 
00:35:55.285 [2024-10-28 05:11:45.686055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.686081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.686193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.686219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.686361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.686389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.686543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.686582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.686708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.686743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.686886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.686913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.687048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.687075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.687189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.687216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.687357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.687385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-10-28 05:11:45.687571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-10-28 05:11:45.687611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 
00:35:55.285 [2024-10-28 05:11:45.687746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.285 [2024-10-28 05:11:45.687776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:55.285 qpair failed and we were unable to recover it.
00:35:55.285 [2024-10-28 05:11:45.687894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.285 [2024-10-28 05:11:45.687922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420
00:35:55.285 qpair failed and we were unable to recover it.
00:35:55.285 [2024-10-28 05:11:45.688066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.285 [2024-10-28 05:11:45.688092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420
00:35:55.285 qpair failed and we were unable to recover it.
00:35:55.285 [2024-10-28 05:11:45.688200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.285 [2024-10-28 05:11:45.688227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420
00:35:55.285 qpair failed and we were unable to recover it.
00:35:55.285 [2024-10-28 05:11:45.688358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.285 [2024-10-28 05:11:45.688385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-28 05:11:45.688505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-28 05:11:45.688531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-28 05:11:45.688651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:55.286 [2024-10-28 05:11:45.688682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-28 05:11:45.688803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:55.286 [2024-10-28 05:11:45.688837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:55.286 [2024-10-28 05:11:45.688958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-28 05:11:45.688988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:55.286 [2024-10-28 05:11:45.689123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-28 05:11:45.689150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-28 05:11:45.689258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-28 05:11:45.689285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-28 05:11:45.689400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-28 05:11:45.689427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-28 05:11:45.689545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-28 05:11:45.689572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-28 05:11:45.689698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-28 05:11:45.689739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-28 05:11:45.689856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-28 05:11:45.689884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-28 05:11:45.690001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-28 05:11:45.690029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
00:35:55.286 [2024-10-28 05:11:45.690150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-10-28 05:11:45.690177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
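The rpc_cmd steps interleaved above (host/target_disconnect.sh lines @22, @24 and @25) configure the target side of this test case: create the subsystem, attach the Malloc0 namespace, then add a TCP listener on 10.0.0.2:4420. A minimal sketch of the equivalent direct calls with SPDK's scripts/rpc.py, assuming the nvmf_tgt application is already running on its default /var/tmp/spdk.sock, the Malloc0 bdev exists, and the TCP transport seen in the "TCP Transport Init" notice has already been created:
  # target-side setup mirrored by the rpc_cmd calls in this log
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
Once the listener is added, the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice appears a little further down in this log.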
00:35:55.286 [2024-10-28 05:11:45.690326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.690365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.690493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.690523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.690683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.690732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.690847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.690873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.691000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.691026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.691171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.691197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.691340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.691367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.691480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.691507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3390 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.691664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.691705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.691822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.691849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f08000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-10-28 05:11:45.691975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.692016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.692164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.692192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.692341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.692370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.692513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-10-28 05:11:45.692540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f04000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.692600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.286 [2024-10-28 05:11:45.695207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.286 [2024-10-28 05:11:45.695366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.286 [2024-10-28 05:11:45.695394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.286 [2024-10-28 05:11:45.695415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.286 [2024-10-28 05:11:45.695430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.286 [2024-10-28 05:11:45.695464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.286 qpair failed and we were unable to recover it. 
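The repeated pattern above is the point of this disconnect test case: every connect() to 10.0.0.2 port 4420 is refused with errno 111 (ECONNREFUSED) until the "NVMe/TCP Target Listening" notice appears, and the fabric CONNECT that follows is then rejected by the target ("Unknown controller ID 0x1", completion sct 1, sc 130), which the host surfaces as CQ transport error -6 on qpair id 3. As a minimal sketch only, not part of the autotest scripts and using a placeholder address and port, a bash helper like the following reproduces the ECONNREFUSED retry behaviour by probing a TCP port until a listener accepts:

wait_for_listener() {
    # Retry a plain TCP connect until it succeeds or a 5 second budget expires.
    local addr=$1 port=$2 deadline=$((SECONDS + 5))
    while (( SECONDS < deadline )); do
        # bash's /dev/tcp pseudo-path attempts a TCP connect; a refused
        # connection (errno 111) fails immediately, so the subshell exits
        # nonzero. A filtered port may instead hang for the kernel timeout.
        if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
            return 0
        fi
        sleep 0.2
    done
    return 1
}

# Example: 127.0.0.1:4420 is a placeholder, not the test bed's 10.0.0.2.
wait_for_listener 127.0.0.1 4420 && echo "listener up" || echo "timed out"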
00:35:55.286 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.286 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:55.286 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.286 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:55.286 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.286 05:11:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2480975 00:35:55.286 [2024-10-28 05:11:45.704995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.286 [2024-10-28 05:11:45.705117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.286 [2024-10-28 05:11:45.705144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.286 [2024-10-28 05:11:45.705158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.286 [2024-10-28 05:11:45.705172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.286 [2024-10-28 05:11:45.705202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-10-28 05:11:45.715034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.286 [2024-10-28 05:11:45.715149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.286 [2024-10-28 05:11:45.715176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.286 [2024-10-28 05:11:45.715190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.286 [2024-10-28 05:11:45.715204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.286 [2024-10-28 05:11:45.715234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.286 qpair failed and we were unable to recover it. 
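For reference, and assuming the autotest rpc_cmd helper forwards these calls to SPDK's scripts/rpc.py against the running target's RPC socket (an assumption, not something this log shows), the two listener additions above correspond to invocations along the lines of:

# Assumed equivalents of the rpc_cmd calls logged above; they require an SPDK
# checkout and a running nvmf target exposing the default RPC socket.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420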
00:35:55.286 [2024-10-28 05:11:45.724971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.287 [2024-10-28 05:11:45.725089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.287 [2024-10-28 05:11:45.725114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.287 [2024-10-28 05:11:45.725129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.287 [2024-10-28 05:11:45.725143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.287 [2024-10-28 05:11:45.725173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.287 qpair failed and we were unable to recover it. 00:35:55.287 [2024-10-28 05:11:45.734954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.287 [2024-10-28 05:11:45.735077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.287 [2024-10-28 05:11:45.735102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.287 [2024-10-28 05:11:45.735117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.287 [2024-10-28 05:11:45.735131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.287 [2024-10-28 05:11:45.735161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.287 qpair failed and we were unable to recover it. 00:35:55.287 [2024-10-28 05:11:45.744899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.287 [2024-10-28 05:11:45.745015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.287 [2024-10-28 05:11:45.745041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.287 [2024-10-28 05:11:45.745055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.287 [2024-10-28 05:11:45.745069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.287 [2024-10-28 05:11:45.745099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.287 qpair failed and we were unable to recover it. 
00:35:55.287 [2024-10-28 05:11:45.754922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.287 [2024-10-28 05:11:45.755049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.287 [2024-10-28 05:11:45.755076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.287 [2024-10-28 05:11:45.755096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.287 [2024-10-28 05:11:45.755112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.287 [2024-10-28 05:11:45.755156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.287 qpair failed and we were unable to recover it. 00:35:55.287 [2024-10-28 05:11:45.764911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.287 [2024-10-28 05:11:45.765036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.287 [2024-10-28 05:11:45.765062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.287 [2024-10-28 05:11:45.765077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.287 [2024-10-28 05:11:45.765090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.287 [2024-10-28 05:11:45.765120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.287 qpair failed and we were unable to recover it. 00:35:55.287 [2024-10-28 05:11:45.774940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.287 [2024-10-28 05:11:45.775060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.287 [2024-10-28 05:11:45.775092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.287 [2024-10-28 05:11:45.775107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.287 [2024-10-28 05:11:45.775122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.287 [2024-10-28 05:11:45.775152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.287 qpair failed and we were unable to recover it. 
00:35:55.287 [2024-10-28 05:11:45.784951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.287 [2024-10-28 05:11:45.785071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.287 [2024-10-28 05:11:45.785098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.287 [2024-10-28 05:11:45.785113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.287 [2024-10-28 05:11:45.785130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.287 [2024-10-28 05:11:45.785159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.287 qpair failed and we were unable to recover it. 00:35:55.287 [2024-10-28 05:11:45.794964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.287 [2024-10-28 05:11:45.795081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.287 [2024-10-28 05:11:45.795107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.287 [2024-10-28 05:11:45.795121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.287 [2024-10-28 05:11:45.795135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.287 [2024-10-28 05:11:45.795164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.287 qpair failed and we were unable to recover it. 00:35:55.287 [2024-10-28 05:11:45.804997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.287 [2024-10-28 05:11:45.805112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.287 [2024-10-28 05:11:45.805136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.287 [2024-10-28 05:11:45.805150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.287 [2024-10-28 05:11:45.805163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.287 [2024-10-28 05:11:45.805192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.287 qpair failed and we were unable to recover it. 
00:35:55.287 [2024-10-28 05:11:45.814960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.287 [2024-10-28 05:11:45.815069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.287 [2024-10-28 05:11:45.815096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.287 [2024-10-28 05:11:45.815115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.287 [2024-10-28 05:11:45.815130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.287 [2024-10-28 05:11:45.815159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.287 qpair failed and we were unable to recover it. 00:35:55.287 [2024-10-28 05:11:45.824995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.287 [2024-10-28 05:11:45.825110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.287 [2024-10-28 05:11:45.825134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.287 [2024-10-28 05:11:45.825150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.287 [2024-10-28 05:11:45.825163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.287 [2024-10-28 05:11:45.825193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.287 qpair failed and we were unable to recover it. 00:35:55.548 [2024-10-28 05:11:45.835054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.548 [2024-10-28 05:11:45.835164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.548 [2024-10-28 05:11:45.835188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.548 [2024-10-28 05:11:45.835203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.548 [2024-10-28 05:11:45.835216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.548 [2024-10-28 05:11:45.835245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.548 qpair failed and we were unable to recover it. 
00:35:55.548 [2024-10-28 05:11:45.844965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.548 [2024-10-28 05:11:45.845101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.548 [2024-10-28 05:11:45.845130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.548 [2024-10-28 05:11:45.845145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.548 [2024-10-28 05:11:45.845159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.548 [2024-10-28 05:11:45.845189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.548 qpair failed and we were unable to recover it. 00:35:55.548 [2024-10-28 05:11:45.854978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.548 [2024-10-28 05:11:45.855094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.548 [2024-10-28 05:11:45.855129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.548 [2024-10-28 05:11:45.855144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.548 [2024-10-28 05:11:45.855158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.548 [2024-10-28 05:11:45.855187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.548 qpair failed and we were unable to recover it. 00:35:55.548 [2024-10-28 05:11:45.864967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.548 [2024-10-28 05:11:45.865085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.548 [2024-10-28 05:11:45.865110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.548 [2024-10-28 05:11:45.865125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.548 [2024-10-28 05:11:45.865138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.548 [2024-10-28 05:11:45.865167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.548 qpair failed and we were unable to recover it. 
00:35:55.548 [2024-10-28 05:11:45.874983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.548 [2024-10-28 05:11:45.875104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.548 [2024-10-28 05:11:45.875128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.548 [2024-10-28 05:11:45.875143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.548 [2024-10-28 05:11:45.875157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.548 [2024-10-28 05:11:45.875187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.548 qpair failed and we were unable to recover it. 00:35:55.548 [2024-10-28 05:11:45.884984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.548 [2024-10-28 05:11:45.885103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.548 [2024-10-28 05:11:45.885128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.548 [2024-10-28 05:11:45.885142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.548 [2024-10-28 05:11:45.885155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.548 [2024-10-28 05:11:45.885185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.548 qpair failed and we were unable to recover it. 00:35:55.548 [2024-10-28 05:11:45.894999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:45.895106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:45.895131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:45.895145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:45.895158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:45.895188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 
00:35:55.549 [2024-10-28 05:11:45.905024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:45.905148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:45.905179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:45.905194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:45.905208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:45.905237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 00:35:55.549 [2024-10-28 05:11:45.915017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:45.915126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:45.915152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:45.915166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:45.915179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:45.915208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 00:35:55.549 [2024-10-28 05:11:45.925091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:45.925204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:45.925230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:45.925244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:45.925258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:45.925288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 
00:35:55.549 [2024-10-28 05:11:45.934987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:45.935097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:45.935123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:45.935137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:45.935150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:45.935179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 00:35:55.549 [2024-10-28 05:11:45.945019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:45.945148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:45.945173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:45.945192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:45.945207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:45.945251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 00:35:55.549 [2024-10-28 05:11:45.954985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:45.955094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:45.955120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:45.955135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:45.955148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:45.955177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 
00:35:55.549 [2024-10-28 05:11:45.965015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:45.965130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:45.965156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:45.965170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:45.965183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:45.965213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 00:35:55.549 [2024-10-28 05:11:45.975015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:45.975123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:45.975149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:45.975163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:45.975176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:45.975206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 00:35:55.549 [2024-10-28 05:11:45.985035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:45.985157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:45.985183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:45.985197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:45.985210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:45.985240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 
00:35:55.549 [2024-10-28 05:11:45.995007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:45.995120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:45.995145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:45.995160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:45.995173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:45.995202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 00:35:55.549 [2024-10-28 05:11:46.005048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:46.005163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:46.005189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:46.005203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:46.005216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:46.005245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 00:35:55.549 [2024-10-28 05:11:46.015116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:46.015251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:46.015277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:46.015292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:46.015305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:46.015336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.549 qpair failed and we were unable to recover it. 
00:35:55.549 [2024-10-28 05:11:46.025013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.549 [2024-10-28 05:11:46.025123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.549 [2024-10-28 05:11:46.025149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.549 [2024-10-28 05:11:46.025163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.549 [2024-10-28 05:11:46.025176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.549 [2024-10-28 05:11:46.025205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.550 qpair failed and we were unable to recover it. 00:35:55.550 [2024-10-28 05:11:46.035085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.550 [2024-10-28 05:11:46.035199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.550 [2024-10-28 05:11:46.035233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.550 [2024-10-28 05:11:46.035248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.550 [2024-10-28 05:11:46.035261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.550 [2024-10-28 05:11:46.035292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.550 qpair failed and we were unable to recover it. 00:35:55.550 [2024-10-28 05:11:46.045067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.550 [2024-10-28 05:11:46.045181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.550 [2024-10-28 05:11:46.045207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.550 [2024-10-28 05:11:46.045221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.550 [2024-10-28 05:11:46.045235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.550 [2024-10-28 05:11:46.045264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.550 qpair failed and we were unable to recover it. 
00:35:55.550 [2024-10-28 05:11:46.055037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.550 [2024-10-28 05:11:46.055147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.550 [2024-10-28 05:11:46.055174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.550 [2024-10-28 05:11:46.055189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.550 [2024-10-28 05:11:46.055201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.550 [2024-10-28 05:11:46.055231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.550 qpair failed and we were unable to recover it. 00:35:55.550 [2024-10-28 05:11:46.065099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.550 [2024-10-28 05:11:46.065249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.550 [2024-10-28 05:11:46.065275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.550 [2024-10-28 05:11:46.065290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.550 [2024-10-28 05:11:46.065304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.550 [2024-10-28 05:11:46.065350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.550 qpair failed and we were unable to recover it. 00:35:55.550 [2024-10-28 05:11:46.075082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.550 [2024-10-28 05:11:46.075199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.550 [2024-10-28 05:11:46.075224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.550 [2024-10-28 05:11:46.075244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.550 [2024-10-28 05:11:46.075258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.550 [2024-10-28 05:11:46.075288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.550 qpair failed and we were unable to recover it. 
00:35:55.550 [2024-10-28 05:11:46.085098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.550 [2024-10-28 05:11:46.085216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.550 [2024-10-28 05:11:46.085242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.550 [2024-10-28 05:11:46.085256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.550 [2024-10-28 05:11:46.085270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.550 [2024-10-28 05:11:46.085300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.550 qpair failed and we were unable to recover it. 00:35:55.550 [2024-10-28 05:11:46.095050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.550 [2024-10-28 05:11:46.095160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.550 [2024-10-28 05:11:46.095186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.550 [2024-10-28 05:11:46.095201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.550 [2024-10-28 05:11:46.095213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.550 [2024-10-28 05:11:46.095242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.550 qpair failed and we were unable to recover it. 00:35:55.550 [2024-10-28 05:11:46.105076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.550 [2024-10-28 05:11:46.105186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.550 [2024-10-28 05:11:46.105210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.550 [2024-10-28 05:11:46.105225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.550 [2024-10-28 05:11:46.105239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.550 [2024-10-28 05:11:46.105268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.550 qpair failed and we were unable to recover it. 
00:35:55.550 [2024-10-28 05:11:46.115082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.550 [2024-10-28 05:11:46.115190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.550 [2024-10-28 05:11:46.115215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.550 [2024-10-28 05:11:46.115229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.550 [2024-10-28 05:11:46.115242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.550 [2024-10-28 05:11:46.115273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.550 qpair failed and we were unable to recover it. 00:35:55.550 [2024-10-28 05:11:46.125072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.550 [2024-10-28 05:11:46.125182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.550 [2024-10-28 05:11:46.125209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.550 [2024-10-28 05:11:46.125224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.550 [2024-10-28 05:11:46.125237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.550 [2024-10-28 05:11:46.125267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.550 qpair failed and we were unable to recover it. 00:35:55.550 [2024-10-28 05:11:46.135096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.550 [2024-10-28 05:11:46.135219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.550 [2024-10-28 05:11:46.135249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.550 [2024-10-28 05:11:46.135265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.550 [2024-10-28 05:11:46.135279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.550 [2024-10-28 05:11:46.135311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.550 qpair failed and we were unable to recover it. 
00:35:55.809 [2024-10-28 05:11:46.145105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.809 [2024-10-28 05:11:46.145232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.809 [2024-10-28 05:11:46.145259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-10-28 05:11:46.145274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-10-28 05:11:46.145292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.809 [2024-10-28 05:11:46.145323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.810 qpair failed and we were unable to recover it. 00:35:55.810 [2024-10-28 05:11:46.155105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.810 [2024-10-28 05:11:46.155215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.810 [2024-10-28 05:11:46.155242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.810 [2024-10-28 05:11:46.155257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.810 [2024-10-28 05:11:46.155269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.810 [2024-10-28 05:11:46.155301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.810 qpair failed and we were unable to recover it. 00:35:55.810 [2024-10-28 05:11:46.165157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.810 [2024-10-28 05:11:46.165301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.810 [2024-10-28 05:11:46.165327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.810 [2024-10-28 05:11:46.165341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.810 [2024-10-28 05:11:46.165354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.810 [2024-10-28 05:11:46.165384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.810 qpair failed and we were unable to recover it. 
00:35:55.810 [2024-10-28 05:11:46.175170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.810 [2024-10-28 05:11:46.175340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.810 [2024-10-28 05:11:46.175366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.810 [2024-10-28 05:11:46.175381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.810 [2024-10-28 05:11:46.175394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.810 [2024-10-28 05:11:46.175423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.810 qpair failed and we were unable to recover it. 00:35:55.810 [2024-10-28 05:11:46.185105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.810 [2024-10-28 05:11:46.185214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.810 [2024-10-28 05:11:46.185238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.810 [2024-10-28 05:11:46.185253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.810 [2024-10-28 05:11:46.185266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.810 [2024-10-28 05:11:46.185296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.810 qpair failed and we were unable to recover it. 00:35:55.810 [2024-10-28 05:11:46.195110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.810 [2024-10-28 05:11:46.195219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.810 [2024-10-28 05:11:46.195244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.810 [2024-10-28 05:11:46.195260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.810 [2024-10-28 05:11:46.195273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.810 [2024-10-28 05:11:46.195303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.810 qpair failed and we were unable to recover it. 
00:35:55.810 [2024-10-28 05:11:46.205194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.810 [2024-10-28 05:11:46.205309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.810 [2024-10-28 05:11:46.205333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.810 [2024-10-28 05:11:46.205354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.810 [2024-10-28 05:11:46.205367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.810 [2024-10-28 05:11:46.205397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.810 qpair failed and we were unable to recover it. 00:35:55.810 [2024-10-28 05:11:46.215163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.810 [2024-10-28 05:11:46.215281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.810 [2024-10-28 05:11:46.215307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.810 [2024-10-28 05:11:46.215321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.810 [2024-10-28 05:11:46.215335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.810 [2024-10-28 05:11:46.215364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.810 qpair failed and we were unable to recover it. 00:35:55.810 [2024-10-28 05:11:46.225108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.810 [2024-10-28 05:11:46.225215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.810 [2024-10-28 05:11:46.225242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.810 [2024-10-28 05:11:46.225257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.810 [2024-10-28 05:11:46.225270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.810 [2024-10-28 05:11:46.225300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.810 qpair failed and we were unable to recover it. 
00:35:55.810 [2024-10-28 05:11:46.235141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.810 [2024-10-28 05:11:46.235254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.810 [2024-10-28 05:11:46.235278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.810 [2024-10-28 05:11:46.235292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.810 [2024-10-28 05:11:46.235305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.810 [2024-10-28 05:11:46.235336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.810 qpair failed and we were unable to recover it. 00:35:55.810 [2024-10-28 05:11:46.245246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.810 [2024-10-28 05:11:46.245370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.810 [2024-10-28 05:11:46.245398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.810 [2024-10-28 05:11:46.245414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.810 [2024-10-28 05:11:46.245430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.810 [2024-10-28 05:11:46.245461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.810 qpair failed and we were unable to recover it. 00:35:55.810 [2024-10-28 05:11:46.255155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.810 [2024-10-28 05:11:46.255267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.810 [2024-10-28 05:11:46.255292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.810 [2024-10-28 05:11:46.255307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.811 [2024-10-28 05:11:46.255320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.811 [2024-10-28 05:11:46.255349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.811 qpair failed and we were unable to recover it. 
00:35:55.811 [2024-10-28 05:11:46.265128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.811 [2024-10-28 05:11:46.265233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.811 [2024-10-28 05:11:46.265258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.811 [2024-10-28 05:11:46.265273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.811 [2024-10-28 05:11:46.265287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.811 [2024-10-28 05:11:46.265317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.811 qpair failed and we were unable to recover it. 00:35:55.811 [2024-10-28 05:11:46.275156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.811 [2024-10-28 05:11:46.275269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.811 [2024-10-28 05:11:46.275294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.811 [2024-10-28 05:11:46.275309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.811 [2024-10-28 05:11:46.275322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.811 [2024-10-28 05:11:46.275352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.811 qpair failed and we were unable to recover it. 00:35:55.811 [2024-10-28 05:11:46.285202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.811 [2024-10-28 05:11:46.285321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.811 [2024-10-28 05:11:46.285346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.811 [2024-10-28 05:11:46.285361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.811 [2024-10-28 05:11:46.285374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.811 [2024-10-28 05:11:46.285404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.811 qpair failed and we were unable to recover it. 
00:35:55.811 [2024-10-28 05:11:46.295178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.811 [2024-10-28 05:11:46.295298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.811 [2024-10-28 05:11:46.295322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.811 [2024-10-28 05:11:46.295337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.811 [2024-10-28 05:11:46.295350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.811 [2024-10-28 05:11:46.295380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.811 qpair failed and we were unable to recover it. 00:35:55.811 [2024-10-28 05:11:46.305159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.811 [2024-10-28 05:11:46.305292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.811 [2024-10-28 05:11:46.305319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.811 [2024-10-28 05:11:46.305333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.811 [2024-10-28 05:11:46.305347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.811 [2024-10-28 05:11:46.305376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.811 qpair failed and we were unable to recover it. 00:35:55.811 [2024-10-28 05:11:46.315198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.811 [2024-10-28 05:11:46.315310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.811 [2024-10-28 05:11:46.315334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.811 [2024-10-28 05:11:46.315349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.811 [2024-10-28 05:11:46.315362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.811 [2024-10-28 05:11:46.315392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.811 qpair failed and we were unable to recover it. 
00:35:55.811 [2024-10-28 05:11:46.325167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.811 [2024-10-28 05:11:46.325299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.811 [2024-10-28 05:11:46.325326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.811 [2024-10-28 05:11:46.325341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.811 [2024-10-28 05:11:46.325355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.811 [2024-10-28 05:11:46.325384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.811 qpair failed and we were unable to recover it. 00:35:55.811 [2024-10-28 05:11:46.335155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.811 [2024-10-28 05:11:46.335262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.811 [2024-10-28 05:11:46.335286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.811 [2024-10-28 05:11:46.335307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.811 [2024-10-28 05:11:46.335321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.811 [2024-10-28 05:11:46.335350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.811 qpair failed and we were unable to recover it. 00:35:55.811 [2024-10-28 05:11:46.345187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.811 [2024-10-28 05:11:46.345295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.811 [2024-10-28 05:11:46.345319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.811 [2024-10-28 05:11:46.345334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.811 [2024-10-28 05:11:46.345347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.811 [2024-10-28 05:11:46.345377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.811 qpair failed and we were unable to recover it. 
00:35:55.811 [2024-10-28 05:11:46.355181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.811 [2024-10-28 05:11:46.355288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.811 [2024-10-28 05:11:46.355314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.811 [2024-10-28 05:11:46.355329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.811 [2024-10-28 05:11:46.355341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.811 [2024-10-28 05:11:46.355371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.811 qpair failed and we were unable to recover it. 00:35:55.811 [2024-10-28 05:11:46.365195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.811 [2024-10-28 05:11:46.365321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.811 [2024-10-28 05:11:46.365347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.811 [2024-10-28 05:11:46.365361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.811 [2024-10-28 05:11:46.365375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.811 [2024-10-28 05:11:46.365404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-10-28 05:11:46.375182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.812 [2024-10-28 05:11:46.375320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.812 [2024-10-28 05:11:46.375345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.812 [2024-10-28 05:11:46.375360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.812 [2024-10-28 05:11:46.375373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.812 [2024-10-28 05:11:46.375409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.812 qpair failed and we were unable to recover it. 
00:35:55.812 [2024-10-28 05:11:46.385222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.812 [2024-10-28 05:11:46.385353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.812 [2024-10-28 05:11:46.385380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.812 [2024-10-28 05:11:46.385394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.812 [2024-10-28 05:11:46.385407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.812 [2024-10-28 05:11:46.385452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-10-28 05:11:46.395213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.812 [2024-10-28 05:11:46.395323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.812 [2024-10-28 05:11:46.395349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.812 [2024-10-28 05:11:46.395364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.812 [2024-10-28 05:11:46.395377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:55.812 [2024-10-28 05:11:46.395406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:55.812 qpair failed and we were unable to recover it. 00:35:56.071 [2024-10-28 05:11:46.405209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.071 [2024-10-28 05:11:46.405321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.071 [2024-10-28 05:11:46.405348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.071 [2024-10-28 05:11:46.405362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.071 [2024-10-28 05:11:46.405375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.071 [2024-10-28 05:11:46.405405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.071 qpair failed and we were unable to recover it. 
00:35:56.071 [2024-10-28 05:11:46.415193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.071 [2024-10-28 05:11:46.415299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.071 [2024-10-28 05:11:46.415325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.071 [2024-10-28 05:11:46.415339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.071 [2024-10-28 05:11:46.415352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.071 [2024-10-28 05:11:46.415381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.071 qpair failed and we were unable to recover it. 00:35:56.071 [2024-10-28 05:11:46.425243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.071 [2024-10-28 05:11:46.425362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.071 [2024-10-28 05:11:46.425388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.071 [2024-10-28 05:11:46.425403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.071 [2024-10-28 05:11:46.425416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.071 [2024-10-28 05:11:46.425445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.071 qpair failed and we were unable to recover it. 00:35:56.071 [2024-10-28 05:11:46.435222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.071 [2024-10-28 05:11:46.435329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.071 [2024-10-28 05:11:46.435355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.071 [2024-10-28 05:11:46.435369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.071 [2024-10-28 05:11:46.435382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.071 [2024-10-28 05:11:46.435411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.071 qpair failed and we were unable to recover it. 
00:35:56.071 [2024-10-28 05:11:46.445285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.071 [2024-10-28 05:11:46.445404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.071 [2024-10-28 05:11:46.445429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.071 [2024-10-28 05:11:46.445444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.071 [2024-10-28 05:11:46.445457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.071 [2024-10-28 05:11:46.445486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.071 qpair failed and we were unable to recover it. 00:35:56.072 [2024-10-28 05:11:46.455226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-10-28 05:11:46.455331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-10-28 05:11:46.455357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-10-28 05:11:46.455371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-10-28 05:11:46.455384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.072 [2024-10-28 05:11:46.455414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-10-28 05:11:46.465299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-10-28 05:11:46.465407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-10-28 05:11:46.465433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-10-28 05:11:46.465453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-10-28 05:11:46.465468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.072 [2024-10-28 05:11:46.465498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.072 qpair failed and we were unable to recover it. 
00:35:56.072 [2024-10-28 05:11:46.475294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-10-28 05:11:46.475409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-10-28 05:11:46.475436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-10-28 05:11:46.475451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-10-28 05:11:46.475464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.072 [2024-10-28 05:11:46.475494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-10-28 05:11:46.485247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-10-28 05:11:46.485362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-10-28 05:11:46.485388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-10-28 05:11:46.485402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-10-28 05:11:46.485415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.072 [2024-10-28 05:11:46.485444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-10-28 05:11:46.495260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-10-28 05:11:46.495379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-10-28 05:11:46.495405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-10-28 05:11:46.495419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-10-28 05:11:46.495432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.072 [2024-10-28 05:11:46.495462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.072 qpair failed and we were unable to recover it. 
00:35:56.072 [2024-10-28 05:11:46.505267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-10-28 05:11:46.505374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-10-28 05:11:46.505401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-10-28 05:11:46.505416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-10-28 05:11:46.505428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.072 [2024-10-28 05:11:46.505464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-10-28 05:11:46.515274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-10-28 05:11:46.515390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-10-28 05:11:46.515415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-10-28 05:11:46.515430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-10-28 05:11:46.515443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.072 [2024-10-28 05:11:46.515474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-10-28 05:11:46.525295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-10-28 05:11:46.525434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-10-28 05:11:46.525460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-10-28 05:11:46.525475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-10-28 05:11:46.525488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.072 [2024-10-28 05:11:46.525518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.072 qpair failed and we were unable to recover it. 
00:35:56.072 [2024-10-28 05:11:46.535226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-10-28 05:11:46.535351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-10-28 05:11:46.535377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-10-28 05:11:46.535392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-10-28 05:11:46.535405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.072 [2024-10-28 05:11:46.535434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-10-28 05:11:46.545249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-10-28 05:11:46.545357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-10-28 05:11:46.545383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-10-28 05:11:46.545398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-10-28 05:11:46.545411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.072 [2024-10-28 05:11:46.545441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-10-28 05:11:46.555287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-10-28 05:11:46.555407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-10-28 05:11:46.555431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-10-28 05:11:46.555446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-10-28 05:11:46.555459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.072 [2024-10-28 05:11:46.555489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.072 qpair failed and we were unable to recover it. 
00:35:56.073 [2024-10-28 05:11:46.565259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-10-28 05:11:46.565375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-10-28 05:11:46.565399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-10-28 05:11:46.565414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-10-28 05:11:46.565427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.073 [2024-10-28 05:11:46.565456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-10-28 05:11:46.575256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-10-28 05:11:46.575397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-10-28 05:11:46.575425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-10-28 05:11:46.575440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-10-28 05:11:46.575453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.073 [2024-10-28 05:11:46.575482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-10-28 05:11:46.585278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-10-28 05:11:46.585407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-10-28 05:11:46.585433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-10-28 05:11:46.585449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-10-28 05:11:46.585462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.073 [2024-10-28 05:11:46.585491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.073 qpair failed and we were unable to recover it. 
00:35:56.073 [2024-10-28 05:11:46.595315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-10-28 05:11:46.595440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-10-28 05:11:46.595466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-10-28 05:11:46.595490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-10-28 05:11:46.595507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.073 [2024-10-28 05:11:46.595552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-10-28 05:11:46.605294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-10-28 05:11:46.605442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-10-28 05:11:46.605468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-10-28 05:11:46.605483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-10-28 05:11:46.605497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.073 [2024-10-28 05:11:46.605526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-10-28 05:11:46.615308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-10-28 05:11:46.615421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-10-28 05:11:46.615446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-10-28 05:11:46.615460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-10-28 05:11:46.615474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.073 [2024-10-28 05:11:46.615505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.073 qpair failed and we were unable to recover it. 
00:35:56.073 [2024-10-28 05:11:46.625297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-10-28 05:11:46.625413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-10-28 05:11:46.625437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-10-28 05:11:46.625452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-10-28 05:11:46.625466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.073 [2024-10-28 05:11:46.625496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-10-28 05:11:46.635300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-10-28 05:11:46.635405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-10-28 05:11:46.635430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-10-28 05:11:46.635444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-10-28 05:11:46.635457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.073 [2024-10-28 05:11:46.635492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-10-28 05:11:46.645323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-10-28 05:11:46.645442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-10-28 05:11:46.645469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-10-28 05:11:46.645483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-10-28 05:11:46.645497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.073 [2024-10-28 05:11:46.645527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.073 qpair failed and we were unable to recover it. 
00:35:56.073 [2024-10-28 05:11:46.655343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-10-28 05:11:46.655468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-10-28 05:11:46.655494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-10-28 05:11:46.655509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-10-28 05:11:46.655522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.073 [2024-10-28 05:11:46.655551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.333 [2024-10-28 05:11:46.665295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.333 [2024-10-28 05:11:46.665406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.333 [2024-10-28 05:11:46.665430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.333 [2024-10-28 05:11:46.665445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.333 [2024-10-28 05:11:46.665459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.333 [2024-10-28 05:11:46.665489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.333 qpair failed and we were unable to recover it. 00:35:56.333 [2024-10-28 05:11:46.675296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.333 [2024-10-28 05:11:46.675408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.333 [2024-10-28 05:11:46.675432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.333 [2024-10-28 05:11:46.675446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.333 [2024-10-28 05:11:46.675459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.333 [2024-10-28 05:11:46.675489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.333 qpair failed and we were unable to recover it. 
00:35:56.333 [2024-10-28 05:11:46.685301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.333 [2024-10-28 05:11:46.685425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.333 [2024-10-28 05:11:46.685449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.333 [2024-10-28 05:11:46.685463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.333 [2024-10-28 05:11:46.685477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.333 [2024-10-28 05:11:46.685508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.333 qpair failed and we were unable to recover it. 00:35:56.333 [2024-10-28 05:11:46.695314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.333 [2024-10-28 05:11:46.695421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.333 [2024-10-28 05:11:46.695445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.333 [2024-10-28 05:11:46.695459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.334 [2024-10-28 05:11:46.695473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.334 [2024-10-28 05:11:46.695502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.334 qpair failed and we were unable to recover it. 00:35:56.334 [2024-10-28 05:11:46.705381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.334 [2024-10-28 05:11:46.705525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.334 [2024-10-28 05:11:46.705552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.334 [2024-10-28 05:11:46.705567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.334 [2024-10-28 05:11:46.705581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.334 [2024-10-28 05:11:46.705610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.334 qpair failed and we were unable to recover it. 
00:35:56.334 [2024-10-28 05:11:46.715306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.334 [2024-10-28 05:11:46.715416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.334 [2024-10-28 05:11:46.715440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.334 [2024-10-28 05:11:46.715455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.334 [2024-10-28 05:11:46.715469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.334 [2024-10-28 05:11:46.715498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.334 qpair failed and we were unable to recover it. 00:35:56.334 [2024-10-28 05:11:46.725353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.334 [2024-10-28 05:11:46.725477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.334 [2024-10-28 05:11:46.725503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.334 [2024-10-28 05:11:46.725524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.334 [2024-10-28 05:11:46.725538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.334 [2024-10-28 05:11:46.725568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.334 qpair failed and we were unable to recover it. 00:35:56.334 [2024-10-28 05:11:46.735356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.334 [2024-10-28 05:11:46.735465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.334 [2024-10-28 05:11:46.735490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.334 [2024-10-28 05:11:46.735504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.334 [2024-10-28 05:11:46.735518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.334 [2024-10-28 05:11:46.735548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.334 qpair failed and we were unable to recover it. 
00:35:56.334 [2024-10-28 05:11:46.745321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.334 [2024-10-28 05:11:46.745433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.334 [2024-10-28 05:11:46.745458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.334 [2024-10-28 05:11:46.745472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.334 [2024-10-28 05:11:46.745487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.334 [2024-10-28 05:11:46.745518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.334 qpair failed and we were unable to recover it. 00:35:56.334 [2024-10-28 05:11:46.755368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.334 [2024-10-28 05:11:46.755530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.334 [2024-10-28 05:11:46.755557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.334 [2024-10-28 05:11:46.755572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.334 [2024-10-28 05:11:46.755600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.334 [2024-10-28 05:11:46.755628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.334 qpair failed and we were unable to recover it. 00:35:56.334 [2024-10-28 05:11:46.765366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.334 [2024-10-28 05:11:46.765481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.334 [2024-10-28 05:11:46.765505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.334 [2024-10-28 05:11:46.765520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.334 [2024-10-28 05:11:46.765533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.334 [2024-10-28 05:11:46.765568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.334 qpair failed and we were unable to recover it. 
00:35:56.334 [2024-10-28 05:11:46.775364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.334 [2024-10-28 05:11:46.775514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.334 [2024-10-28 05:11:46.775541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.334 [2024-10-28 05:11:46.775555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.334 [2024-10-28 05:11:46.775569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.334 [2024-10-28 05:11:46.775597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.334 qpair failed and we were unable to recover it. 00:35:56.334 [2024-10-28 05:11:46.785382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.334 [2024-10-28 05:11:46.785538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.334 [2024-10-28 05:11:46.785566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.334 [2024-10-28 05:11:46.785583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.334 [2024-10-28 05:11:46.785599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.334 [2024-10-28 05:11:46.785629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.334 qpair failed and we were unable to recover it. 00:35:56.334 [2024-10-28 05:11:46.795346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.334 [2024-10-28 05:11:46.795474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.334 [2024-10-28 05:11:46.795501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.334 [2024-10-28 05:11:46.795516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.334 [2024-10-28 05:11:46.795529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.334 [2024-10-28 05:11:46.795558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.334 qpair failed and we were unable to recover it. 
00:35:56.334 [2024-10-28 05:11:46.805407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.334 [2024-10-28 05:11:46.805525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.335 [2024-10-28 05:11:46.805551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.335 [2024-10-28 05:11:46.805566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.335 [2024-10-28 05:11:46.805579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.335 [2024-10-28 05:11:46.805609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.335 qpair failed and we were unable to recover it. 00:35:56.335 [2024-10-28 05:11:46.815376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.335 [2024-10-28 05:11:46.815504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.335 [2024-10-28 05:11:46.815530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.335 [2024-10-28 05:11:46.815544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.335 [2024-10-28 05:11:46.815558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.335 [2024-10-28 05:11:46.815588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.335 qpair failed and we were unable to recover it. 00:35:56.335 [2024-10-28 05:11:46.825345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.335 [2024-10-28 05:11:46.825455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.335 [2024-10-28 05:11:46.825481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.335 [2024-10-28 05:11:46.825495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.335 [2024-10-28 05:11:46.825507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.335 [2024-10-28 05:11:46.825538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.335 qpair failed and we were unable to recover it. 
00:35:56.335 [2024-10-28 05:11:46.835367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.335 [2024-10-28 05:11:46.835476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.335 [2024-10-28 05:11:46.835503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.335 [2024-10-28 05:11:46.835517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.335 [2024-10-28 05:11:46.835531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.335 [2024-10-28 05:11:46.835560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.335 qpair failed and we were unable to recover it. 00:35:56.335 [2024-10-28 05:11:46.845366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.335 [2024-10-28 05:11:46.845480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.335 [2024-10-28 05:11:46.845507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.335 [2024-10-28 05:11:46.845521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.335 [2024-10-28 05:11:46.845534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.335 [2024-10-28 05:11:46.845564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.335 qpair failed and we were unable to recover it. 00:35:56.335 [2024-10-28 05:11:46.855368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.335 [2024-10-28 05:11:46.855479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.335 [2024-10-28 05:11:46.855506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.335 [2024-10-28 05:11:46.855526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.335 [2024-10-28 05:11:46.855541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.335 [2024-10-28 05:11:46.855571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.335 qpair failed and we were unable to recover it. 
00:35:56.335 [2024-10-28 05:11:46.865384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.335 [2024-10-28 05:11:46.865491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.335 [2024-10-28 05:11:46.865517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.335 [2024-10-28 05:11:46.865532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.335 [2024-10-28 05:11:46.865545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.335 [2024-10-28 05:11:46.865575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.335 qpair failed and we were unable to recover it. 00:35:56.335 [2024-10-28 05:11:46.875401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.335 [2024-10-28 05:11:46.875557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.335 [2024-10-28 05:11:46.875583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.335 [2024-10-28 05:11:46.875597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.335 [2024-10-28 05:11:46.875610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.335 [2024-10-28 05:11:46.875650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.335 qpair failed and we were unable to recover it. 00:35:56.335 [2024-10-28 05:11:46.885398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.335 [2024-10-28 05:11:46.885514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.335 [2024-10-28 05:11:46.885541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.335 [2024-10-28 05:11:46.885555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.335 [2024-10-28 05:11:46.885568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.335 [2024-10-28 05:11:46.885599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.335 qpair failed and we were unable to recover it. 
00:35:56.335 [2024-10-28 05:11:46.895383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.335 [2024-10-28 05:11:46.895493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.335 [2024-10-28 05:11:46.895519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.335 [2024-10-28 05:11:46.895533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.335 [2024-10-28 05:11:46.895546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.335 [2024-10-28 05:11:46.895582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.335 qpair failed and we were unable to recover it. 00:35:56.335 [2024-10-28 05:11:46.905403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.335 [2024-10-28 05:11:46.905571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.335 [2024-10-28 05:11:46.905598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.335 [2024-10-28 05:11:46.905627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.335 [2024-10-28 05:11:46.905648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.335 [2024-10-28 05:11:46.905693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.335 qpair failed and we were unable to recover it. 00:35:56.335 [2024-10-28 05:11:46.915376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.335 [2024-10-28 05:11:46.915523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-10-28 05:11:46.915548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-10-28 05:11:46.915563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-10-28 05:11:46.915578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.336 [2024-10-28 05:11:46.915607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.336 qpair failed and we were unable to recover it. 
00:35:56.336 [2024-10-28 05:11:46.925435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-10-28 05:11:46.925612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-10-28 05:11:46.925649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-10-28 05:11:46.925671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-10-28 05:11:46.925685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.336 [2024-10-28 05:11:46.925717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.336 qpair failed and we were unable to recover it. 00:35:56.595 [2024-10-28 05:11:46.935413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.595 [2024-10-28 05:11:46.935525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.595 [2024-10-28 05:11:46.935551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.595 [2024-10-28 05:11:46.935566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.595 [2024-10-28 05:11:46.935583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.595 [2024-10-28 05:11:46.935613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.595 qpair failed and we were unable to recover it. 00:35:56.595 [2024-10-28 05:11:46.945382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.595 [2024-10-28 05:11:46.945496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.595 [2024-10-28 05:11:46.945521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.595 [2024-10-28 05:11:46.945535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.595 [2024-10-28 05:11:46.945549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.595 [2024-10-28 05:11:46.945579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.595 qpair failed and we were unable to recover it. 
00:35:56.595 [2024-10-28 05:11:46.955425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.595 [2024-10-28 05:11:46.955556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.595 [2024-10-28 05:11:46.955582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.596 [2024-10-28 05:11:46.955596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.596 [2024-10-28 05:11:46.955609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.596 [2024-10-28 05:11:46.955647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.596 qpair failed and we were unable to recover it. 00:35:56.596 [2024-10-28 05:11:46.965421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.596 [2024-10-28 05:11:46.965580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.596 [2024-10-28 05:11:46.965605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.596 [2024-10-28 05:11:46.965619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.596 [2024-10-28 05:11:46.965632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.596 [2024-10-28 05:11:46.965672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.596 qpair failed and we were unable to recover it. 00:35:56.596 [2024-10-28 05:11:46.975424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.596 [2024-10-28 05:11:46.975544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.596 [2024-10-28 05:11:46.975570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.596 [2024-10-28 05:11:46.975585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.596 [2024-10-28 05:11:46.975597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.596 [2024-10-28 05:11:46.975627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.596 qpair failed and we were unable to recover it. 
00:35:56.596 [2024-10-28 05:11:46.985425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.596 [2024-10-28 05:11:46.985558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.596 [2024-10-28 05:11:46.985584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.596 [2024-10-28 05:11:46.985608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.596 [2024-10-28 05:11:46.985623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.596 [2024-10-28 05:11:46.985662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.596 qpair failed and we were unable to recover it. 00:35:56.596 [2024-10-28 05:11:46.995429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.596 [2024-10-28 05:11:46.995583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.596 [2024-10-28 05:11:46.995609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.596 [2024-10-28 05:11:46.995624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.596 [2024-10-28 05:11:46.995644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.596 [2024-10-28 05:11:46.995676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.596 qpair failed and we were unable to recover it. 00:35:56.596 [2024-10-28 05:11:47.005417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.596 [2024-10-28 05:11:47.005526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.596 [2024-10-28 05:11:47.005551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.596 [2024-10-28 05:11:47.005566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.596 [2024-10-28 05:11:47.005579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.596 [2024-10-28 05:11:47.005609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.596 qpair failed and we were unable to recover it. 
00:35:56.596 [2024-10-28 05:11:47.015440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.596 [2024-10-28 05:11:47.015553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.596 [2024-10-28 05:11:47.015580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.596 [2024-10-28 05:11:47.015595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.596 [2024-10-28 05:11:47.015608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.596 [2024-10-28 05:11:47.015646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.596 qpair failed and we were unable to recover it. 00:35:56.596 [2024-10-28 05:11:47.025428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.596 [2024-10-28 05:11:47.025538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.596 [2024-10-28 05:11:47.025564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.596 [2024-10-28 05:11:47.025578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.596 [2024-10-28 05:11:47.025592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.596 [2024-10-28 05:11:47.025628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.596 qpair failed and we were unable to recover it. 00:35:56.596 [2024-10-28 05:11:47.035436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.596 [2024-10-28 05:11:47.035541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.596 [2024-10-28 05:11:47.035566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.596 [2024-10-28 05:11:47.035580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.596 [2024-10-28 05:11:47.035593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.596 [2024-10-28 05:11:47.035623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.596 qpair failed and we were unable to recover it. 
00:35:56.596 [2024-10-28 05:11:47.045566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.596 [2024-10-28 05:11:47.045692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.596 [2024-10-28 05:11:47.045720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.596 [2024-10-28 05:11:47.045735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.596 [2024-10-28 05:11:47.045752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.596 [2024-10-28 05:11:47.045784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.596 qpair failed and we were unable to recover it. 00:35:56.596 [2024-10-28 05:11:47.055451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.596 [2024-10-28 05:11:47.055564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.596 [2024-10-28 05:11:47.055591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.596 [2024-10-28 05:11:47.055606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.596 [2024-10-28 05:11:47.055619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.596 [2024-10-28 05:11:47.055657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.596 qpair failed and we were unable to recover it. 00:35:56.596 [2024-10-28 05:11:47.065479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.597 [2024-10-28 05:11:47.065592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.597 [2024-10-28 05:11:47.065618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.597 [2024-10-28 05:11:47.065643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.597 [2024-10-28 05:11:47.065660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.597 [2024-10-28 05:11:47.065690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.597 qpair failed and we were unable to recover it. 
00:35:56.597 [2024-10-28 05:11:47.075530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.597 [2024-10-28 05:11:47.075662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.597 [2024-10-28 05:11:47.075689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.597 [2024-10-28 05:11:47.075704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.597 [2024-10-28 05:11:47.075717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.597 [2024-10-28 05:11:47.075748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.597 qpair failed and we were unable to recover it. 00:35:56.597 [2024-10-28 05:11:47.085510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.597 [2024-10-28 05:11:47.085626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.597 [2024-10-28 05:11:47.085662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.597 [2024-10-28 05:11:47.085676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.597 [2024-10-28 05:11:47.085689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.597 [2024-10-28 05:11:47.085720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.597 qpair failed and we were unable to recover it. 00:35:56.597 [2024-10-28 05:11:47.095480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.597 [2024-10-28 05:11:47.095644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.597 [2024-10-28 05:11:47.095671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.597 [2024-10-28 05:11:47.095687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.597 [2024-10-28 05:11:47.095700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.597 [2024-10-28 05:11:47.095730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.597 qpair failed and we were unable to recover it. 
00:35:56.597 [2024-10-28 05:11:47.105483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.597 [2024-10-28 05:11:47.105592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.597 [2024-10-28 05:11:47.105618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.597 [2024-10-28 05:11:47.105641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.597 [2024-10-28 05:11:47.105657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.597 [2024-10-28 05:11:47.105686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.597 qpair failed and we were unable to recover it. 00:35:56.597 [2024-10-28 05:11:47.115485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.597 [2024-10-28 05:11:47.115603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.597 [2024-10-28 05:11:47.115630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.597 [2024-10-28 05:11:47.115661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.597 [2024-10-28 05:11:47.115677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.597 [2024-10-28 05:11:47.115707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.597 qpair failed and we were unable to recover it. 00:35:56.597 [2024-10-28 05:11:47.125510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.597 [2024-10-28 05:11:47.125623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.597 [2024-10-28 05:11:47.125656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.597 [2024-10-28 05:11:47.125671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.597 [2024-10-28 05:11:47.125684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.597 [2024-10-28 05:11:47.125715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.597 qpair failed and we were unable to recover it. 
00:35:56.597 [2024-10-28 05:11:47.135500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.597 [2024-10-28 05:11:47.135612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.597 [2024-10-28 05:11:47.135647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.597 [2024-10-28 05:11:47.135664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.597 [2024-10-28 05:11:47.135678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.597 [2024-10-28 05:11:47.135708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.597 qpair failed and we were unable to recover it. 00:35:56.597 [2024-10-28 05:11:47.145512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.597 [2024-10-28 05:11:47.145652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.597 [2024-10-28 05:11:47.145679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.597 [2024-10-28 05:11:47.145693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.597 [2024-10-28 05:11:47.145706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.597 [2024-10-28 05:11:47.145737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.597 qpair failed and we were unable to recover it. 00:35:56.597 [2024-10-28 05:11:47.155502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.597 [2024-10-28 05:11:47.155670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.597 [2024-10-28 05:11:47.155697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.597 [2024-10-28 05:11:47.155712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.597 [2024-10-28 05:11:47.155731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.597 [2024-10-28 05:11:47.155769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.597 qpair failed and we were unable to recover it. 
00:35:56.597 [2024-10-28 05:11:47.165521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.597 [2024-10-28 05:11:47.165656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.597 [2024-10-28 05:11:47.165682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.597 [2024-10-28 05:11:47.165697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.597 [2024-10-28 05:11:47.165709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.597 [2024-10-28 05:11:47.165740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.597 qpair failed and we were unable to recover it. 00:35:56.597 [2024-10-28 05:11:47.175537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.597 [2024-10-28 05:11:47.175655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.597 [2024-10-28 05:11:47.175682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.598 [2024-10-28 05:11:47.175696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.598 [2024-10-28 05:11:47.175710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.598 [2024-10-28 05:11:47.175740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.598 qpair failed and we were unable to recover it. 00:35:56.598 [2024-10-28 05:11:47.185625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.598 [2024-10-28 05:11:47.185767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.598 [2024-10-28 05:11:47.185793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.598 [2024-10-28 05:11:47.185807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.598 [2024-10-28 05:11:47.185821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.598 [2024-10-28 05:11:47.185850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.598 qpair failed and we were unable to recover it. 
00:35:56.858 [2024-10-28 05:11:47.195537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.858 [2024-10-28 05:11:47.195664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.858 [2024-10-28 05:11:47.195690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.858 [2024-10-28 05:11:47.195704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.858 [2024-10-28 05:11:47.195717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.858 [2024-10-28 05:11:47.195746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.858 qpair failed and we were unable to recover it. 00:35:56.858 [2024-10-28 05:11:47.205524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.858 [2024-10-28 05:11:47.205652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.858 [2024-10-28 05:11:47.205678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.858 [2024-10-28 05:11:47.205692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.858 [2024-10-28 05:11:47.205706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.858 [2024-10-28 05:11:47.205735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.858 qpair failed and we were unable to recover it. 00:35:56.858 [2024-10-28 05:11:47.215529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.858 [2024-10-28 05:11:47.215654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.858 [2024-10-28 05:11:47.215680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.858 [2024-10-28 05:11:47.215694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.858 [2024-10-28 05:11:47.215707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.858 [2024-10-28 05:11:47.215735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.858 qpair failed and we were unable to recover it. 
00:35:56.858 [2024-10-28 05:11:47.225515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.858 [2024-10-28 05:11:47.225627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.858 [2024-10-28 05:11:47.225661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.858 [2024-10-28 05:11:47.225676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.858 [2024-10-28 05:11:47.225689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.858 [2024-10-28 05:11:47.225720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.858 qpair failed and we were unable to recover it. 00:35:56.858 [2024-10-28 05:11:47.235525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.858 [2024-10-28 05:11:47.235657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.858 [2024-10-28 05:11:47.235683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.858 [2024-10-28 05:11:47.235699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.858 [2024-10-28 05:11:47.235713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.858 [2024-10-28 05:11:47.235742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.858 qpair failed and we were unable to recover it. 00:35:56.858 [2024-10-28 05:11:47.245550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.858 [2024-10-28 05:11:47.245690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.858 [2024-10-28 05:11:47.245716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.858 [2024-10-28 05:11:47.245736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.858 [2024-10-28 05:11:47.245751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.858 [2024-10-28 05:11:47.245781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.858 qpair failed and we were unable to recover it. 
00:35:56.858 [2024-10-28 05:11:47.255572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.858 [2024-10-28 05:11:47.255707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.858 [2024-10-28 05:11:47.255732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.858 [2024-10-28 05:11:47.255745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.858 [2024-10-28 05:11:47.255758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.858 [2024-10-28 05:11:47.255786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.858 qpair failed and we were unable to recover it. 00:35:56.858 [2024-10-28 05:11:47.265599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.858 [2024-10-28 05:11:47.265745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.858 [2024-10-28 05:11:47.265771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.858 [2024-10-28 05:11:47.265785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.858 [2024-10-28 05:11:47.265799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.858 [2024-10-28 05:11:47.265828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.858 qpair failed and we were unable to recover it. 00:35:56.858 [2024-10-28 05:11:47.275527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.858 [2024-10-28 05:11:47.275667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.275693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.275707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.275721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.275749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 
00:35:56.859 [2024-10-28 05:11:47.285581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.285710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.285736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.285750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.285763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.285799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 00:35:56.859 [2024-10-28 05:11:47.295656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.295785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.295811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.295825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.295839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.295868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 00:35:56.859 [2024-10-28 05:11:47.305575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.305737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.305763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.305777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.305791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.305822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 
00:35:56.859 [2024-10-28 05:11:47.315561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.315695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.315723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.315738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.315752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.315783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 00:35:56.859 [2024-10-28 05:11:47.325561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.325693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.325720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.325734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.325748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.325778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 00:35:56.859 [2024-10-28 05:11:47.335606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.335780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.335807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.335821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.335834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.335864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 
00:35:56.859 [2024-10-28 05:11:47.345571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.345697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.345722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.345737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.345751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.345782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 00:35:56.859 [2024-10-28 05:11:47.355594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.355728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.355754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.355769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.355783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.355813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 00:35:56.859 [2024-10-28 05:11:47.365593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.365728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.365755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.365769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.365783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.365812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 
00:35:56.859 [2024-10-28 05:11:47.375620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.375736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.375763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.375783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.375798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.375827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 00:35:56.859 [2024-10-28 05:11:47.385614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.385778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.385804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.385819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.385833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.385862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 00:35:56.859 [2024-10-28 05:11:47.395611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.395725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.395751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.395765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.395779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.395809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 
00:35:56.859 [2024-10-28 05:11:47.405607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.859 [2024-10-28 05:11:47.405731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.859 [2024-10-28 05:11:47.405757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.859 [2024-10-28 05:11:47.405772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.859 [2024-10-28 05:11:47.405786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.859 [2024-10-28 05:11:47.405817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.859 qpair failed and we were unable to recover it. 00:35:56.859 [2024-10-28 05:11:47.415696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.860 [2024-10-28 05:11:47.415813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.860 [2024-10-28 05:11:47.415840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.860 [2024-10-28 05:11:47.415854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.860 [2024-10-28 05:11:47.415869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.860 [2024-10-28 05:11:47.415904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.860 qpair failed and we were unable to recover it. 00:35:56.860 [2024-10-28 05:11:47.425607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.860 [2024-10-28 05:11:47.425728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.860 [2024-10-28 05:11:47.425754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.860 [2024-10-28 05:11:47.425768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.860 [2024-10-28 05:11:47.425782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.860 [2024-10-28 05:11:47.425811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.860 qpair failed and we were unable to recover it. 
00:35:56.860 [2024-10-28 05:11:47.435622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.860 [2024-10-28 05:11:47.435747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.860 [2024-10-28 05:11:47.435773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.860 [2024-10-28 05:11:47.435787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.860 [2024-10-28 05:11:47.435800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.860 [2024-10-28 05:11:47.435830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.860 qpair failed and we were unable to recover it. 00:35:56.860 [2024-10-28 05:11:47.445674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.860 [2024-10-28 05:11:47.445799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.860 [2024-10-28 05:11:47.445825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.860 [2024-10-28 05:11:47.445840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.860 [2024-10-28 05:11:47.445854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:56.860 [2024-10-28 05:11:47.445883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.860 qpair failed and we were unable to recover it. 00:35:57.119 [2024-10-28 05:11:47.455667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.119 [2024-10-28 05:11:47.455807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.119 [2024-10-28 05:11:47.455835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.119 [2024-10-28 05:11:47.455850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.119 [2024-10-28 05:11:47.455877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:35:57.119 [2024-10-28 05:11:47.455908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.119 qpair failed and we were unable to recover it. 
00:35:57.119 [2024-10-28 05:11:47.465674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.119 [2024-10-28 05:11:47.465794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.119 [2024-10-28 05:11:47.465827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.119 [2024-10-28 05:11:47.465844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.119 [2024-10-28 05:11:47.465859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.119 [2024-10-28 05:11:47.465891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.119 qpair failed and we were unable to recover it. 00:35:57.119 [2024-10-28 05:11:47.475647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.119 [2024-10-28 05:11:47.475765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.119 [2024-10-28 05:11:47.475793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.475811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.475826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.475857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 00:35:57.120 [2024-10-28 05:11:47.485665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.485786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.485814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.485828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.485843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.485874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 
00:35:57.120 [2024-10-28 05:11:47.495664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.495782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.495810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.495825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.495840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.495871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 00:35:57.120 [2024-10-28 05:11:47.505652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.505760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.505792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.505808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.505822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.505854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 00:35:57.120 [2024-10-28 05:11:47.515688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.515818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.515845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.515860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.515877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.515907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 
00:35:57.120 [2024-10-28 05:11:47.525693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.525822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.525849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.525864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.525878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.525911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 00:35:57.120 [2024-10-28 05:11:47.535661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.535780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.535807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.535822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.535836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.535867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 00:35:57.120 [2024-10-28 05:11:47.545677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.545787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.545814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.545830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.545850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.545897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 
00:35:57.120 [2024-10-28 05:11:47.555671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.555790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.555817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.555831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.555846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.555877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 00:35:57.120 [2024-10-28 05:11:47.565705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.565831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.565859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.565874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.565893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.565939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 00:35:57.120 [2024-10-28 05:11:47.575706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.575820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.575847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.575862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.575876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.575909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 
00:35:57.120 [2024-10-28 05:11:47.585690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.585804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.585831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.585846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.585860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.585891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 00:35:57.120 [2024-10-28 05:11:47.595686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.595811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.595837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.595853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.595868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.595901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 00:35:57.120 [2024-10-28 05:11:47.605761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.120 [2024-10-28 05:11:47.605937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.120 [2024-10-28 05:11:47.605966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.120 [2024-10-28 05:11:47.605982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.120 [2024-10-28 05:11:47.606001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.120 [2024-10-28 05:11:47.606046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.120 qpair failed and we were unable to recover it. 
00:35:57.121 [2024-10-28 05:11:47.615739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.121 [2024-10-28 05:11:47.615903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.121 [2024-10-28 05:11:47.615930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.121 [2024-10-28 05:11:47.615946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.121 [2024-10-28 05:11:47.615960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.121 [2024-10-28 05:11:47.615991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.121 qpair failed and we were unable to recover it. 00:35:57.121 [2024-10-28 05:11:47.625725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.121 [2024-10-28 05:11:47.625849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.121 [2024-10-28 05:11:47.625875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.121 [2024-10-28 05:11:47.625890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.121 [2024-10-28 05:11:47.625904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.121 [2024-10-28 05:11:47.625936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.121 qpair failed and we were unable to recover it. 00:35:57.121 [2024-10-28 05:11:47.635711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.121 [2024-10-28 05:11:47.635823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.121 [2024-10-28 05:11:47.635854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.121 [2024-10-28 05:11:47.635870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.121 [2024-10-28 05:11:47.635884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.121 [2024-10-28 05:11:47.635915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.121 qpair failed and we were unable to recover it. 
00:35:57.121 [2024-10-28 05:11:47.645717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.121 [2024-10-28 05:11:47.645832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.121 [2024-10-28 05:11:47.645859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.121 [2024-10-28 05:11:47.645880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.121 [2024-10-28 05:11:47.645893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.121 [2024-10-28 05:11:47.645924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.121 qpair failed and we were unable to recover it. 00:35:57.121 [2024-10-28 05:11:47.655735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.121 [2024-10-28 05:11:47.655863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.121 [2024-10-28 05:11:47.655890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.121 [2024-10-28 05:11:47.655905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.121 [2024-10-28 05:11:47.655919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.121 [2024-10-28 05:11:47.655951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.121 qpair failed and we were unable to recover it. 00:35:57.121 [2024-10-28 05:11:47.665721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.121 [2024-10-28 05:11:47.665845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.121 [2024-10-28 05:11:47.665871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.121 [2024-10-28 05:11:47.665886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.121 [2024-10-28 05:11:47.665901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.121 [2024-10-28 05:11:47.665932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.121 qpair failed and we were unable to recover it. 
00:35:57.121 [2024-10-28 05:11:47.675717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.121 [2024-10-28 05:11:47.675831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.121 [2024-10-28 05:11:47.675858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.121 [2024-10-28 05:11:47.675873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.121 [2024-10-28 05:11:47.675893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.121 [2024-10-28 05:11:47.675926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.121 qpair failed and we were unable to recover it. 00:35:57.121 [2024-10-28 05:11:47.685736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.121 [2024-10-28 05:11:47.685867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.121 [2024-10-28 05:11:47.685894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.121 [2024-10-28 05:11:47.685909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.121 [2024-10-28 05:11:47.685923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.121 [2024-10-28 05:11:47.685954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.121 qpair failed and we were unable to recover it. 00:35:57.121 [2024-10-28 05:11:47.695753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.121 [2024-10-28 05:11:47.695918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.121 [2024-10-28 05:11:47.695945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.121 [2024-10-28 05:11:47.695960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.121 [2024-10-28 05:11:47.695975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.121 [2024-10-28 05:11:47.696005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.121 qpair failed and we were unable to recover it. 
00:35:57.121 [2024-10-28 05:11:47.705768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.121 [2024-10-28 05:11:47.705883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.121 [2024-10-28 05:11:47.705909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.121 [2024-10-28 05:11:47.705924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.121 [2024-10-28 05:11:47.705938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.121 [2024-10-28 05:11:47.705970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.121 qpair failed and we were unable to recover it. 00:35:57.380 [2024-10-28 05:11:47.715818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.380 [2024-10-28 05:11:47.715933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.380 [2024-10-28 05:11:47.715960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.380 [2024-10-28 05:11:47.715975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.380 [2024-10-28 05:11:47.715989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.380 [2024-10-28 05:11:47.716019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.380 qpair failed and we were unable to recover it. 00:35:57.380 [2024-10-28 05:11:47.725765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.380 [2024-10-28 05:11:47.725889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.380 [2024-10-28 05:11:47.725916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.380 [2024-10-28 05:11:47.725931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.380 [2024-10-28 05:11:47.725945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.380 [2024-10-28 05:11:47.725977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.380 qpair failed and we were unable to recover it. 
00:35:57.380 [2024-10-28 05:11:47.735773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.380 [2024-10-28 05:11:47.735900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.380 [2024-10-28 05:11:47.735928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.380 [2024-10-28 05:11:47.735942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.380 [2024-10-28 05:11:47.735957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.380 [2024-10-28 05:11:47.735988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.380 qpair failed and we were unable to recover it. 00:35:57.381 [2024-10-28 05:11:47.745746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.745893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.745919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.745935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.745950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.745980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 00:35:57.381 [2024-10-28 05:11:47.755764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.755879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.755905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.755921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.755936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.755967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 
00:35:57.381 [2024-10-28 05:11:47.765764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.765900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.765931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.765947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.765961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.765993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 00:35:57.381 [2024-10-28 05:11:47.775862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.775984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.776010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.776026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.776040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.776071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 00:35:57.381 [2024-10-28 05:11:47.785753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.785874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.785900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.785915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.785929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.785960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 
00:35:57.381 [2024-10-28 05:11:47.795766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.795874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.795901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.795916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.795930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.795960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 00:35:57.381 [2024-10-28 05:11:47.805760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.805886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.805913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.805935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.805951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.805983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 00:35:57.381 [2024-10-28 05:11:47.815794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.815919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.815945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.815959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.815974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.816005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 
00:35:57.381 [2024-10-28 05:11:47.825859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.825978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.826006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.826021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.826036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.826067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 00:35:57.381 [2024-10-28 05:11:47.835759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.835880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.835907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.835923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.835938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.835968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 00:35:57.381 [2024-10-28 05:11:47.845815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.845939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.845975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.845990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.846004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.846048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 
00:35:57.381 [2024-10-28 05:11:47.855829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.855954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.855981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.855996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.856011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.856041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 00:35:57.381 [2024-10-28 05:11:47.865795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.865911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.865938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.865953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.865968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.381 [2024-10-28 05:11:47.865998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.381 qpair failed and we were unable to recover it. 00:35:57.381 [2024-10-28 05:11:47.875795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.381 [2024-10-28 05:11:47.875932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.381 [2024-10-28 05:11:47.875959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.381 [2024-10-28 05:11:47.875973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.381 [2024-10-28 05:11:47.875987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.382 [2024-10-28 05:11:47.876018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.382 qpair failed and we were unable to recover it. 
00:35:57.382 [2024-10-28 05:11:47.885919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.382 [2024-10-28 05:11:47.886041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.382 [2024-10-28 05:11:47.886067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.382 [2024-10-28 05:11:47.886082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.382 [2024-10-28 05:11:47.886097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.382 [2024-10-28 05:11:47.886128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.382 qpair failed and we were unable to recover it. 00:35:57.382 [2024-10-28 05:11:47.895843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.382 [2024-10-28 05:11:47.895966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.382 [2024-10-28 05:11:47.895993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.382 [2024-10-28 05:11:47.896008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.382 [2024-10-28 05:11:47.896023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.382 [2024-10-28 05:11:47.896053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.382 qpair failed and we were unable to recover it. 00:35:57.382 [2024-10-28 05:11:47.905811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.382 [2024-10-28 05:11:47.905920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.382 [2024-10-28 05:11:47.905946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.382 [2024-10-28 05:11:47.905961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.382 [2024-10-28 05:11:47.905975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.382 [2024-10-28 05:11:47.906006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.382 qpair failed and we were unable to recover it. 
00:35:57.382 [2024-10-28 05:11:47.915934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.382 [2024-10-28 05:11:47.916051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.382 [2024-10-28 05:11:47.916083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.382 [2024-10-28 05:11:47.916101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.382 [2024-10-28 05:11:47.916116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.382 [2024-10-28 05:11:47.916149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.382 qpair failed and we were unable to recover it. 00:35:57.382 [2024-10-28 05:11:47.925836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.382 [2024-10-28 05:11:47.925997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.382 [2024-10-28 05:11:47.926024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.382 [2024-10-28 05:11:47.926039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.382 [2024-10-28 05:11:47.926054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.382 [2024-10-28 05:11:47.926097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.382 qpair failed and we were unable to recover it. 00:35:57.382 [2024-10-28 05:11:47.935848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.382 [2024-10-28 05:11:47.935971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.382 [2024-10-28 05:11:47.935997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.382 [2024-10-28 05:11:47.936019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.382 [2024-10-28 05:11:47.936034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.382 [2024-10-28 05:11:47.936066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.382 qpair failed and we were unable to recover it. 
00:35:57.382 [2024-10-28 05:11:47.945868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.382 [2024-10-28 05:11:47.946001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.382 [2024-10-28 05:11:47.946027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.382 [2024-10-28 05:11:47.946042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.382 [2024-10-28 05:11:47.946056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.382 [2024-10-28 05:11:47.946087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.382 qpair failed and we were unable to recover it. 00:35:57.382 [2024-10-28 05:11:47.955820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.382 [2024-10-28 05:11:47.955936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.382 [2024-10-28 05:11:47.955963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.382 [2024-10-28 05:11:47.955978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.382 [2024-10-28 05:11:47.955992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.382 [2024-10-28 05:11:47.956022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.382 qpair failed and we were unable to recover it. 00:35:57.382 [2024-10-28 05:11:47.965879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.382 [2024-10-28 05:11:47.966005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.382 [2024-10-28 05:11:47.966032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.382 [2024-10-28 05:11:47.966046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.382 [2024-10-28 05:11:47.966061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.382 [2024-10-28 05:11:47.966091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.382 qpair failed and we were unable to recover it. 
00:35:57.641 [2024-10-28 05:11:47.975919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.641 [2024-10-28 05:11:47.976066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.641 [2024-10-28 05:11:47.976093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.641 [2024-10-28 05:11:47.976108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.641 [2024-10-28 05:11:47.976122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.641 [2024-10-28 05:11:47.976176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.641 qpair failed and we were unable to recover it. 00:35:57.641 [2024-10-28 05:11:47.985865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.641 [2024-10-28 05:11:47.985980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.641 [2024-10-28 05:11:47.986007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.641 [2024-10-28 05:11:47.986022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.641 [2024-10-28 05:11:47.986037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.641 [2024-10-28 05:11:47.986079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.641 qpair failed and we were unable to recover it. 00:35:57.641 [2024-10-28 05:11:47.995841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.641 [2024-10-28 05:11:47.995959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.641 [2024-10-28 05:11:47.995986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.641 [2024-10-28 05:11:47.996001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.641 [2024-10-28 05:11:47.996015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.641 [2024-10-28 05:11:47.996057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.641 qpair failed and we were unable to recover it. 
00:35:57.641 [2024-10-28 05:11:48.005859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.641 [2024-10-28 05:11:48.005991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.641 [2024-10-28 05:11:48.006018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.641 [2024-10-28 05:11:48.006034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.641 [2024-10-28 05:11:48.006047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.641 [2024-10-28 05:11:48.006078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.641 qpair failed and we were unable to recover it. 00:35:57.642 [2024-10-28 05:11:48.015894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.016021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.016048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.016064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.016078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.016109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 00:35:57.642 [2024-10-28 05:11:48.025883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.026014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.026041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.026057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.026070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.026101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 
00:35:57.642 [2024-10-28 05:11:48.035868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.035986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.036014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.036032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.036047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.036078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 00:35:57.642 [2024-10-28 05:11:48.045993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.046122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.046151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.046167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.046182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.046212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 00:35:57.642 [2024-10-28 05:11:48.055900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.056005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.056031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.056045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.056059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.056089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 
00:35:57.642 [2024-10-28 05:11:48.065910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.066027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.066060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.066080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.066093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.066125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 00:35:57.642 [2024-10-28 05:11:48.075913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.076032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.076060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.076076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.076092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.076124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 00:35:57.642 [2024-10-28 05:11:48.085901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.086021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.086048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.086064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.086077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.086108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 
00:35:57.642 [2024-10-28 05:11:48.095960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.096076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.096101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.096116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.096130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.096161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 00:35:57.642 [2024-10-28 05:11:48.105892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.106028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.106055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.106070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.106090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.106121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 00:35:57.642 [2024-10-28 05:11:48.115960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.116107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.116134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.116150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.116163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.116194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 
00:35:57.642 [2024-10-28 05:11:48.125938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.126053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.126079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.126094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.126108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.126138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 00:35:57.642 [2024-10-28 05:11:48.135942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.136071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.136098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.136114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.642 [2024-10-28 05:11:48.136128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.642 [2024-10-28 05:11:48.136160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.642 qpair failed and we were unable to recover it. 00:35:57.642 [2024-10-28 05:11:48.145962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.642 [2024-10-28 05:11:48.146094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.642 [2024-10-28 05:11:48.146121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.642 [2024-10-28 05:11:48.146136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.643 [2024-10-28 05:11:48.146150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.643 [2024-10-28 05:11:48.146180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.643 qpair failed and we were unable to recover it. 
00:35:57.643 [2024-10-28 05:11:48.155959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.643 [2024-10-28 05:11:48.156072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.643 [2024-10-28 05:11:48.156098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.643 [2024-10-28 05:11:48.156113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.643 [2024-10-28 05:11:48.156127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.643 [2024-10-28 05:11:48.156170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.643 qpair failed and we were unable to recover it. 00:35:57.643 [2024-10-28 05:11:48.165981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.643 [2024-10-28 05:11:48.166102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.643 [2024-10-28 05:11:48.166131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.643 [2024-10-28 05:11:48.166146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.643 [2024-10-28 05:11:48.166160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.643 [2024-10-28 05:11:48.166190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.643 qpair failed and we were unable to recover it. 00:35:57.643 [2024-10-28 05:11:48.175955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.643 [2024-10-28 05:11:48.176065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.643 [2024-10-28 05:11:48.176091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.643 [2024-10-28 05:11:48.176106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.643 [2024-10-28 05:11:48.176119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.643 [2024-10-28 05:11:48.176151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.643 qpair failed and we were unable to recover it. 
00:35:57.643 [2024-10-28 05:11:48.185934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.643 [2024-10-28 05:11:48.186043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.643 [2024-10-28 05:11:48.186068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.643 [2024-10-28 05:11:48.186083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.643 [2024-10-28 05:11:48.186097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.643 [2024-10-28 05:11:48.186127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.643 qpair failed and we were unable to recover it. 00:35:57.643 [2024-10-28 05:11:48.195992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.643 [2024-10-28 05:11:48.196125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.643 [2024-10-28 05:11:48.196158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.643 [2024-10-28 05:11:48.196174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.643 [2024-10-28 05:11:48.196188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.643 [2024-10-28 05:11:48.196218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.643 qpair failed and we were unable to recover it. 00:35:57.643 [2024-10-28 05:11:48.205957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.643 [2024-10-28 05:11:48.206080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.643 [2024-10-28 05:11:48.206106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.643 [2024-10-28 05:11:48.206121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.643 [2024-10-28 05:11:48.206134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.643 [2024-10-28 05:11:48.206165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.643 qpair failed and we were unable to recover it. 
00:35:57.643 [2024-10-28 05:11:48.215972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.643 [2024-10-28 05:11:48.216088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.643 [2024-10-28 05:11:48.216114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.643 [2024-10-28 05:11:48.216129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.643 [2024-10-28 05:11:48.216142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.643 [2024-10-28 05:11:48.216172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.643 qpair failed and we were unable to recover it. 00:35:57.643 [2024-10-28 05:11:48.226058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.643 [2024-10-28 05:11:48.226169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.643 [2024-10-28 05:11:48.226196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.643 [2024-10-28 05:11:48.226211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.643 [2024-10-28 05:11:48.226225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.643 [2024-10-28 05:11:48.226257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.643 qpair failed and we were unable to recover it. 00:35:57.902 [2024-10-28 05:11:48.236022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.902 [2024-10-28 05:11:48.236150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.902 [2024-10-28 05:11:48.236176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.902 [2024-10-28 05:11:48.236192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.902 [2024-10-28 05:11:48.236213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.902 [2024-10-28 05:11:48.236260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.902 qpair failed and we were unable to recover it. 
00:35:57.902 [2024-10-28 05:11:48.246025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.902 [2024-10-28 05:11:48.246154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.902 [2024-10-28 05:11:48.246180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.902 [2024-10-28 05:11:48.246195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.902 [2024-10-28 05:11:48.246209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.902 [2024-10-28 05:11:48.246239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.902 qpair failed and we were unable to recover it. 00:35:57.902 [2024-10-28 05:11:48.255997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.902 [2024-10-28 05:11:48.256107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.902 [2024-10-28 05:11:48.256133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.902 [2024-10-28 05:11:48.256148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.902 [2024-10-28 05:11:48.256161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.902 [2024-10-28 05:11:48.256204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.902 qpair failed and we were unable to recover it. 00:35:57.902 [2024-10-28 05:11:48.266088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.902 [2024-10-28 05:11:48.266249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.902 [2024-10-28 05:11:48.266277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.902 [2024-10-28 05:11:48.266297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.902 [2024-10-28 05:11:48.266312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.902 [2024-10-28 05:11:48.266359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.902 qpair failed and we were unable to recover it. 
00:35:57.902 [2024-10-28 05:11:48.276068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.902 [2024-10-28 05:11:48.276193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.902 [2024-10-28 05:11:48.276220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.902 [2024-10-28 05:11:48.276235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.902 [2024-10-28 05:11:48.276248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.902 [2024-10-28 05:11:48.276294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.902 qpair failed and we were unable to recover it. 00:35:57.902 [2024-10-28 05:11:48.286018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.902 [2024-10-28 05:11:48.286179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.902 [2024-10-28 05:11:48.286206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.902 [2024-10-28 05:11:48.286221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.902 [2024-10-28 05:11:48.286234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.902 [2024-10-28 05:11:48.286265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.902 qpair failed and we were unable to recover it. 00:35:57.902 [2024-10-28 05:11:48.296106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.902 [2024-10-28 05:11:48.296234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.902 [2024-10-28 05:11:48.296264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.902 [2024-10-28 05:11:48.296280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.902 [2024-10-28 05:11:48.296294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.902 [2024-10-28 05:11:48.296327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.902 qpair failed and we were unable to recover it. 
00:35:57.902 [2024-10-28 05:11:48.306015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.902 [2024-10-28 05:11:48.306139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.902 [2024-10-28 05:11:48.306167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.902 [2024-10-28 05:11:48.306182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.902 [2024-10-28 05:11:48.306196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.902 [2024-10-28 05:11:48.306228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.902 qpair failed and we were unable to recover it. 00:35:57.902 [2024-10-28 05:11:48.316036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.902 [2024-10-28 05:11:48.316165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.902 [2024-10-28 05:11:48.316193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.902 [2024-10-28 05:11:48.316208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.902 [2024-10-28 05:11:48.316222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.902 [2024-10-28 05:11:48.316254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.902 qpair failed and we were unable to recover it. 00:35:57.902 [2024-10-28 05:11:48.326000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.902 [2024-10-28 05:11:48.326117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.902 [2024-10-28 05:11:48.326149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.902 [2024-10-28 05:11:48.326165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.902 [2024-10-28 05:11:48.326178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.902 [2024-10-28 05:11:48.326210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.902 qpair failed and we were unable to recover it. 
00:35:57.902 [2024-10-28 05:11:48.336007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.336137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.336163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.336178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.336192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.336224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 00:35:57.903 [2024-10-28 05:11:48.346027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.346140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.346167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.346181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.346194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.346225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 00:35:57.903 [2024-10-28 05:11:48.356005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.356160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.356187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.356202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.356216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.356247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 
00:35:57.903 [2024-10-28 05:11:48.366032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.366163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.366189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.366210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.366226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.366256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 00:35:57.903 [2024-10-28 05:11:48.376072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.376184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.376211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.376226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.376239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.376283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 00:35:57.903 [2024-10-28 05:11:48.386074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.386187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.386213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.386229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.386243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.386275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 
00:35:57.903 [2024-10-28 05:11:48.396086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.396229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.396255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.396271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.396285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.396316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 00:35:57.903 [2024-10-28 05:11:48.406039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.406152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.406177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.406192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.406205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.406242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 00:35:57.903 [2024-10-28 05:11:48.416116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.416223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.416248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.416264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.416279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.416310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 
00:35:57.903 [2024-10-28 05:11:48.426022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.426136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.426161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.426175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.426190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.426220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 00:35:57.903 [2024-10-28 05:11:48.436023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.436127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.436153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.436167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.436181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.436225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 00:35:57.903 [2024-10-28 05:11:48.446041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.446170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.446199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.446215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.446229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.446262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 
00:35:57.903 [2024-10-28 05:11:48.456078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.456205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.456231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.456246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.456260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.903 [2024-10-28 05:11:48.456291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.903 qpair failed and we were unable to recover it. 00:35:57.903 [2024-10-28 05:11:48.466071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.903 [2024-10-28 05:11:48.466185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.903 [2024-10-28 05:11:48.466211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.903 [2024-10-28 05:11:48.466225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.903 [2024-10-28 05:11:48.466239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.904 [2024-10-28 05:11:48.466270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.904 qpair failed and we were unable to recover it. 00:35:57.904 [2024-10-28 05:11:48.476067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.904 [2024-10-28 05:11:48.476179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.904 [2024-10-28 05:11:48.476205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.904 [2024-10-28 05:11:48.476219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.904 [2024-10-28 05:11:48.476233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.904 [2024-10-28 05:11:48.476277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.904 qpair failed and we were unable to recover it. 
00:35:57.904 [2024-10-28 05:11:48.486064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.904 [2024-10-28 05:11:48.486181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.904 [2024-10-28 05:11:48.486206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.904 [2024-10-28 05:11:48.486221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.904 [2024-10-28 05:11:48.486234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:57.904 [2024-10-28 05:11:48.486265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:57.904 qpair failed and we were unable to recover it. 00:35:58.162 [2024-10-28 05:11:48.496119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.162 [2024-10-28 05:11:48.496238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.162 [2024-10-28 05:11:48.496264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.496286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.496306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.496350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 00:35:58.163 [2024-10-28 05:11:48.506094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.506210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.506238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.506254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.506272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.506305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 
00:35:58.163 [2024-10-28 05:11:48.516104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.516213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.516238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.516252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.516266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.516297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 00:35:58.163 [2024-10-28 05:11:48.526100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.526261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.526289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.526304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.526317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.526348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 00:35:58.163 [2024-10-28 05:11:48.536105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.536217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.536242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.536256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.536269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.536306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 
00:35:58.163 [2024-10-28 05:11:48.546126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.546238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.546263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.546278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.546292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.546323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 00:35:58.163 [2024-10-28 05:11:48.556119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.556240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.556265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.556279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.556293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.556324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 00:35:58.163 [2024-10-28 05:11:48.566137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.566258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.566284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.566299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.566316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.566348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 
00:35:58.163 [2024-10-28 05:11:48.576206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.576317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.576343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.576358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.576371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.576403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 00:35:58.163 [2024-10-28 05:11:48.586154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.586266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.586291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.586307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.586321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.586364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 00:35:58.163 [2024-10-28 05:11:48.596128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.596234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.596259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.596273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.596287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.596318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 
00:35:58.163 [2024-10-28 05:11:48.606129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.606241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.606267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.606283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.606297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.606328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 00:35:58.163 [2024-10-28 05:11:48.616161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.616278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.616303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.616318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.163 [2024-10-28 05:11:48.616332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.163 [2024-10-28 05:11:48.616363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.163 qpair failed and we were unable to recover it. 00:35:58.163 [2024-10-28 05:11:48.626158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.163 [2024-10-28 05:11:48.626268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.163 [2024-10-28 05:11:48.626298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.163 [2024-10-28 05:11:48.626314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.626328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.626359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 
00:35:58.164 [2024-10-28 05:11:48.636152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.164 [2024-10-28 05:11:48.636310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.164 [2024-10-28 05:11:48.636339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.164 [2024-10-28 05:11:48.636354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.636369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.636401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 00:35:58.164 [2024-10-28 05:11:48.646190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.164 [2024-10-28 05:11:48.646314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.164 [2024-10-28 05:11:48.646339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.164 [2024-10-28 05:11:48.646355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.646369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.646400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 00:35:58.164 [2024-10-28 05:11:48.656168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.164 [2024-10-28 05:11:48.656288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.164 [2024-10-28 05:11:48.656313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.164 [2024-10-28 05:11:48.656328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.656342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.656374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 
00:35:58.164 [2024-10-28 05:11:48.666171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.164 [2024-10-28 05:11:48.666283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.164 [2024-10-28 05:11:48.666308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.164 [2024-10-28 05:11:48.666323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.666345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.666378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 00:35:58.164 [2024-10-28 05:11:48.676181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.164 [2024-10-28 05:11:48.676298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.164 [2024-10-28 05:11:48.676324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.164 [2024-10-28 05:11:48.676339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.676352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.676384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 00:35:58.164 [2024-10-28 05:11:48.686149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.164 [2024-10-28 05:11:48.686262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.164 [2024-10-28 05:11:48.686287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.164 [2024-10-28 05:11:48.686301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.686315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.686345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 
00:35:58.164 [2024-10-28 05:11:48.696266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.164 [2024-10-28 05:11:48.696378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.164 [2024-10-28 05:11:48.696403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.164 [2024-10-28 05:11:48.696418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.696432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.696463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 00:35:58.164 [2024-10-28 05:11:48.706187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.164 [2024-10-28 05:11:48.706302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.164 [2024-10-28 05:11:48.706328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.164 [2024-10-28 05:11:48.706343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.706357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.706388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 00:35:58.164 [2024-10-28 05:11:48.716256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.164 [2024-10-28 05:11:48.716417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.164 [2024-10-28 05:11:48.716445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.164 [2024-10-28 05:11:48.716461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.716475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.716518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 
00:35:58.164 [2024-10-28 05:11:48.726196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.164 [2024-10-28 05:11:48.726310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.164 [2024-10-28 05:11:48.726336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.164 [2024-10-28 05:11:48.726351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.726365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.726395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 00:35:58.164 [2024-10-28 05:11:48.736200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.164 [2024-10-28 05:11:48.736315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.164 [2024-10-28 05:11:48.736340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.164 [2024-10-28 05:11:48.736355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.736369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.736400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 00:35:58.164 [2024-10-28 05:11:48.746192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.164 [2024-10-28 05:11:48.746332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.164 [2024-10-28 05:11:48.746360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.164 [2024-10-28 05:11:48.746374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.164 [2024-10-28 05:11:48.746387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.164 [2024-10-28 05:11:48.746418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.164 qpair failed and we were unable to recover it. 
00:35:58.424 [2024-10-28 05:11:48.756201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.424 [2024-10-28 05:11:48.756313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.424 [2024-10-28 05:11:48.756344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.424 [2024-10-28 05:11:48.756360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.424 [2024-10-28 05:11:48.756373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.424 [2024-10-28 05:11:48.756405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.424 qpair failed and we were unable to recover it. 00:35:58.424 [2024-10-28 05:11:48.766235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.424 [2024-10-28 05:11:48.766353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.424 [2024-10-28 05:11:48.766378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.424 [2024-10-28 05:11:48.766393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.424 [2024-10-28 05:11:48.766407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.424 [2024-10-28 05:11:48.766438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.424 qpair failed and we were unable to recover it. 00:35:58.424 [2024-10-28 05:11:48.776214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.424 [2024-10-28 05:11:48.776340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.424 [2024-10-28 05:11:48.776369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.424 [2024-10-28 05:11:48.776384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.424 [2024-10-28 05:11:48.776398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.424 [2024-10-28 05:11:48.776441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.424 qpair failed and we were unable to recover it. 
00:35:58.424 [2024-10-28 05:11:48.786219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.424 [2024-10-28 05:11:48.786353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.424 [2024-10-28 05:11:48.786379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.424 [2024-10-28 05:11:48.786394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.424 [2024-10-28 05:11:48.786408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.424 [2024-10-28 05:11:48.786439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.424 qpair failed and we were unable to recover it. 00:35:58.424 [2024-10-28 05:11:48.796243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.424 [2024-10-28 05:11:48.796407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.424 [2024-10-28 05:11:48.796434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.424 [2024-10-28 05:11:48.796449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.424 [2024-10-28 05:11:48.796469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.424 [2024-10-28 05:11:48.796500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.424 qpair failed and we were unable to recover it. 00:35:58.424 [2024-10-28 05:11:48.806231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.424 [2024-10-28 05:11:48.806345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.424 [2024-10-28 05:11:48.806370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.424 [2024-10-28 05:11:48.806384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.424 [2024-10-28 05:11:48.806399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.424 [2024-10-28 05:11:48.806430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.424 qpair failed and we were unable to recover it. 
00:35:58.424 [2024-10-28 05:11:48.816215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.424 [2024-10-28 05:11:48.816322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.424 [2024-10-28 05:11:48.816348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.424 [2024-10-28 05:11:48.816363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.424 [2024-10-28 05:11:48.816377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.424 [2024-10-28 05:11:48.816408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.424 qpair failed and we were unable to recover it. 00:35:58.424 [2024-10-28 05:11:48.826204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.424 [2024-10-28 05:11:48.826309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.424 [2024-10-28 05:11:48.826334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.424 [2024-10-28 05:11:48.826350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.424 [2024-10-28 05:11:48.826364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.424 [2024-10-28 05:11:48.826394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.424 qpair failed and we were unable to recover it. 00:35:58.424 [2024-10-28 05:11:48.836258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.424 [2024-10-28 05:11:48.836370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.424 [2024-10-28 05:11:48.836395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.424 [2024-10-28 05:11:48.836410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.424 [2024-10-28 05:11:48.836424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.424 [2024-10-28 05:11:48.836455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.424 qpair failed and we were unable to recover it. 
00:35:58.424 [2024-10-28 05:11:48.846222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.424 [2024-10-28 05:11:48.846336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.424 [2024-10-28 05:11:48.846360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.424 [2024-10-28 05:11:48.846375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.424 [2024-10-28 05:11:48.846389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.424 [2024-10-28 05:11:48.846421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.424 qpair failed and we were unable to recover it. 00:35:58.424 [2024-10-28 05:11:48.856266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.424 [2024-10-28 05:11:48.856388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.424 [2024-10-28 05:11:48.856413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.424 [2024-10-28 05:11:48.856428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.424 [2024-10-28 05:11:48.856441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.424 [2024-10-28 05:11:48.856472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.424 qpair failed and we were unable to recover it. 00:35:58.424 [2024-10-28 05:11:48.866240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.424 [2024-10-28 05:11:48.866407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.424 [2024-10-28 05:11:48.866435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.424 [2024-10-28 05:11:48.866451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.866465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.866496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 
00:35:58.425 [2024-10-28 05:11:48.876224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.876331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.876356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.876371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.876384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.876415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 00:35:58.425 [2024-10-28 05:11:48.886274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.886438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.886471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.886487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.886500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.886531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 00:35:58.425 [2024-10-28 05:11:48.896239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.896348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.896372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.896386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.896400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.896430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 
00:35:58.425 [2024-10-28 05:11:48.906244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.906376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.906403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.906419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.906432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.906463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 00:35:58.425 [2024-10-28 05:11:48.916280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.916388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.916413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.916427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.916441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.916472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 00:35:58.425 [2024-10-28 05:11:48.926264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.926424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.926451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.926472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.926487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.926518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 
00:35:58.425 [2024-10-28 05:11:48.936297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.936423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.936450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.936465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.936479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.936510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 00:35:58.425 [2024-10-28 05:11:48.946263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.946378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.946404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.946419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.946432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.946463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 00:35:58.425 [2024-10-28 05:11:48.956333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.956446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.956471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.956487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.956501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.956531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 
00:35:58.425 [2024-10-28 05:11:48.966307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.966420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.966446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.966460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.966475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.966512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 00:35:58.425 [2024-10-28 05:11:48.976281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.976389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.976415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.976430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.976444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.976488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 00:35:58.425 [2024-10-28 05:11:48.986340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.986491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.986519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.986534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.986548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.986579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 
00:35:58.425 [2024-10-28 05:11:48.996293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:48.996407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.425 [2024-10-28 05:11:48.996432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.425 [2024-10-28 05:11:48.996447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.425 [2024-10-28 05:11:48.996462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.425 [2024-10-28 05:11:48.996493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.425 qpair failed and we were unable to recover it. 00:35:58.425 [2024-10-28 05:11:49.006314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.425 [2024-10-28 05:11:49.006445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.426 [2024-10-28 05:11:49.006471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.426 [2024-10-28 05:11:49.006486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.426 [2024-10-28 05:11:49.006500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.426 [2024-10-28 05:11:49.006533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.426 qpair failed and we were unable to recover it. 00:35:58.426 [2024-10-28 05:11:49.016318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.426 [2024-10-28 05:11:49.016450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.426 [2024-10-28 05:11:49.016477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.426 [2024-10-28 05:11:49.016493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.426 [2024-10-28 05:11:49.016512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.426 [2024-10-28 05:11:49.016545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.426 qpair failed and we were unable to recover it. 
00:35:58.685 [2024-10-28 05:11:49.026343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.685 [2024-10-28 05:11:49.026458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.685 [2024-10-28 05:11:49.026485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.685 [2024-10-28 05:11:49.026500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.685 [2024-10-28 05:11:49.026513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.685 [2024-10-28 05:11:49.026545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.685 qpair failed and we were unable to recover it. 00:35:58.685 [2024-10-28 05:11:49.036356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.685 [2024-10-28 05:11:49.036471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.685 [2024-10-28 05:11:49.036498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.685 [2024-10-28 05:11:49.036514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.685 [2024-10-28 05:11:49.036528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.685 [2024-10-28 05:11:49.036559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.685 qpair failed and we were unable to recover it. 00:35:58.685 [2024-10-28 05:11:49.046336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.685 [2024-10-28 05:11:49.046500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.685 [2024-10-28 05:11:49.046527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.685 [2024-10-28 05:11:49.046543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.685 [2024-10-28 05:11:49.046557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.685 [2024-10-28 05:11:49.046587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.685 qpair failed and we were unable to recover it. 
00:35:58.685 [2024-10-28 05:11:49.056334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.685 [2024-10-28 05:11:49.056440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.685 [2024-10-28 05:11:49.056467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.685 [2024-10-28 05:11:49.056490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.685 [2024-10-28 05:11:49.056508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.685 [2024-10-28 05:11:49.056538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.685 qpair failed and we were unable to recover it. 00:35:58.685 [2024-10-28 05:11:49.066319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.685 [2024-10-28 05:11:49.066435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.685 [2024-10-28 05:11:49.066462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.685 [2024-10-28 05:11:49.066478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.685 [2024-10-28 05:11:49.066496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.685 [2024-10-28 05:11:49.066527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.685 qpair failed and we were unable to recover it. 00:35:58.685 [2024-10-28 05:11:49.076331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.685 [2024-10-28 05:11:49.076459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.685 [2024-10-28 05:11:49.076486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.685 [2024-10-28 05:11:49.076500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.685 [2024-10-28 05:11:49.076514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.685 [2024-10-28 05:11:49.076545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.685 qpair failed and we were unable to recover it. 
00:35:58.685 [2024-10-28 05:11:49.086334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.685 [2024-10-28 05:11:49.086454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.685 [2024-10-28 05:11:49.086481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.685 [2024-10-28 05:11:49.086496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.685 [2024-10-28 05:11:49.086511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.685 [2024-10-28 05:11:49.086541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.685 qpair failed and we were unable to recover it. 00:35:58.685 [2024-10-28 05:11:49.096320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.685 [2024-10-28 05:11:49.096429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.685 [2024-10-28 05:11:49.096456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.685 [2024-10-28 05:11:49.096471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.685 [2024-10-28 05:11:49.096484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.685 [2024-10-28 05:11:49.096521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.685 qpair failed and we were unable to recover it. 00:35:58.685 [2024-10-28 05:11:49.106346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.685 [2024-10-28 05:11:49.106457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.685 [2024-10-28 05:11:49.106486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.685 [2024-10-28 05:11:49.106502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.685 [2024-10-28 05:11:49.106515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.685 [2024-10-28 05:11:49.106559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.685 qpair failed and we were unable to recover it. 
00:35:58.685 [2024-10-28 05:11:49.116346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.685 [2024-10-28 05:11:49.116498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.685 [2024-10-28 05:11:49.116526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.685 [2024-10-28 05:11:49.116541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.685 [2024-10-28 05:11:49.116556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.116587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 00:35:58.686 [2024-10-28 05:11:49.126472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.126602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.126628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.126656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.126670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.126702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 00:35:58.686 [2024-10-28 05:11:49.136366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.136495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.136522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.136537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.136552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.136583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 
00:35:58.686 [2024-10-28 05:11:49.146349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.146461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.146488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.146503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.146516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.146548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 00:35:58.686 [2024-10-28 05:11:49.156347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.156455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.156482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.156497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.156510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.156541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 00:35:58.686 [2024-10-28 05:11:49.166393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.166507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.166533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.166549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.166564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.166594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 
00:35:58.686 [2024-10-28 05:11:49.176410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.176535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.176562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.176577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.176591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.176623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 00:35:58.686 [2024-10-28 05:11:49.186387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.186554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.186586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.186603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.186616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.186655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 00:35:58.686 [2024-10-28 05:11:49.196387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.196501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.196527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.196543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.196557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.196588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 
00:35:58.686 [2024-10-28 05:11:49.206380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.206495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.206522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.206537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.206550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.206582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 00:35:58.686 [2024-10-28 05:11:49.216395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.216511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.216538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.216554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.216566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.216597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 00:35:58.686 [2024-10-28 05:11:49.226412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.226521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.226548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.226563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.226583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.226615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 
00:35:58.686 [2024-10-28 05:11:49.236394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.236507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.236534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.236549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.236562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.236601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 00:35:58.686 [2024-10-28 05:11:49.246516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.686 [2024-10-28 05:11:49.246653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.686 [2024-10-28 05:11:49.246681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.686 [2024-10-28 05:11:49.246696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.686 [2024-10-28 05:11:49.246711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.686 [2024-10-28 05:11:49.246742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.686 qpair failed and we were unable to recover it. 00:35:58.687 [2024-10-28 05:11:49.256394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.687 [2024-10-28 05:11:49.256503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.687 [2024-10-28 05:11:49.256528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.687 [2024-10-28 05:11:49.256543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.687 [2024-10-28 05:11:49.256556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.687 [2024-10-28 05:11:49.256586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.687 qpair failed and we were unable to recover it. 
00:35:58.687 [2024-10-28 05:11:49.266467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.687 [2024-10-28 05:11:49.266590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.687 [2024-10-28 05:11:49.266618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.687 [2024-10-28 05:11:49.266642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.687 [2024-10-28 05:11:49.266659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.687 [2024-10-28 05:11:49.266691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.687 qpair failed and we were unable to recover it. 00:35:58.687 [2024-10-28 05:11:49.276413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.687 [2024-10-28 05:11:49.276524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.687 [2024-10-28 05:11:49.276551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.687 [2024-10-28 05:11:49.276566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.687 [2024-10-28 05:11:49.276580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.687 [2024-10-28 05:11:49.276612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.687 qpair failed and we were unable to recover it. 00:35:58.946 [2024-10-28 05:11:49.286554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.946 [2024-10-28 05:11:49.286693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.946 [2024-10-28 05:11:49.286720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.946 [2024-10-28 05:11:49.286735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.946 [2024-10-28 05:11:49.286748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.946 [2024-10-28 05:11:49.286780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.946 qpair failed and we were unable to recover it. 
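Every block above carries the same completion status, sct 1, sc 130. Per the NVMe over Fabrics specification, status code type 1 is Command Specific Status, and for a Fabrics Connect command the command-specific code 0x82 (decimal 130) is Connect Invalid Parameters, which is how the target reports the unknown controller ID back to the initiator. The lookup below is a small sketch built from those specification values, not an SPDK API.

```python
#!/usr/bin/env python3
"""Decode the 'sct 1, sc 130' status reported for the failed Fabrics CONNECT."""

# Status Code Type (SCT) values from the NVMe base specification.
SCT = {0: "Generic Command Status", 1: "Command Specific Status",
       2: "Media and Data Integrity Errors", 3: "Path Related Status",
       7: "Vendor Specific"}

# Command-specific status codes defined for the Fabrics Connect command in the
# NVMe over Fabrics specification; 0x82 is what this log reports as sc 130.
FABRICS_CONNECT_SC = {0x80: "Connect Incompatible Format",
                      0x81: "Connect Controller Busy",
                      0x82: "Connect Invalid Parameters",
                      0x83: "Connect Restart Discovery",
                      0x84: "Connect Invalid Host"}

def decode(sct: int, sc: int) -> str:
    """Human-readable form of an (sct, sc) pair for a Fabrics Connect completion."""
    kind = SCT.get(sct, f"unknown SCT {sct}")
    # The command-specific table only applies to Connect completions like the ones above.
    detail = FABRICS_CONNECT_SC.get(sc, f"status 0x{sc:02x}") if sct == 1 else f"status 0x{sc:02x}"
    return f"sct {sct} ({kind}), sc {sc} ({detail})"

print(decode(1, 130))  # sct 1 (Command Specific Status), sc 130 (Connect Invalid Parameters)
```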
00:35:58.946 [2024-10-28 05:11:49.296453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.946 [2024-10-28 05:11:49.296577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.946 [2024-10-28 05:11:49.296604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.946 [2024-10-28 05:11:49.296619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.946 [2024-10-28 05:11:49.296642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.946 [2024-10-28 05:11:49.296678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.946 qpair failed and we were unable to recover it. 00:35:58.946 [2024-10-28 05:11:49.306436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.946 [2024-10-28 05:11:49.306547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.946 [2024-10-28 05:11:49.306574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.946 [2024-10-28 05:11:49.306589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.946 [2024-10-28 05:11:49.306603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.946 [2024-10-28 05:11:49.306642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.946 qpair failed and we were unable to recover it. 00:35:58.946 [2024-10-28 05:11:49.316454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.946 [2024-10-28 05:11:49.316589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.946 [2024-10-28 05:11:49.316622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.946 [2024-10-28 05:11:49.316646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.946 [2024-10-28 05:11:49.316661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.946 [2024-10-28 05:11:49.316693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.946 qpair failed and we were unable to recover it. 
00:35:58.946 [2024-10-28 05:11:49.326483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.946 [2024-10-28 05:11:49.326654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.946 [2024-10-28 05:11:49.326682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.946 [2024-10-28 05:11:49.326697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.946 [2024-10-28 05:11:49.326711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.946 [2024-10-28 05:11:49.326743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.946 qpair failed and we were unable to recover it. 00:35:58.946 [2024-10-28 05:11:49.336453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.946 [2024-10-28 05:11:49.336566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.946 [2024-10-28 05:11:49.336593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.946 [2024-10-28 05:11:49.336608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.946 [2024-10-28 05:11:49.336630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.946 [2024-10-28 05:11:49.336670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.946 qpair failed and we were unable to recover it. 00:35:58.946 [2024-10-28 05:11:49.346474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.946 [2024-10-28 05:11:49.346594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.946 [2024-10-28 05:11:49.346621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.946 [2024-10-28 05:11:49.346654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.946 [2024-10-28 05:11:49.346672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.946 [2024-10-28 05:11:49.346703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.946 qpair failed and we were unable to recover it. 
00:35:58.946 [2024-10-28 05:11:49.356462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.946 [2024-10-28 05:11:49.356583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.946 [2024-10-28 05:11:49.356611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.946 [2024-10-28 05:11:49.356644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.946 [2024-10-28 05:11:49.356670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.946 [2024-10-28 05:11:49.356703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.946 qpair failed and we were unable to recover it. 00:35:58.946 [2024-10-28 05:11:49.366465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.946 [2024-10-28 05:11:49.366593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.946 [2024-10-28 05:11:49.366619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.366651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.366668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.366699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 00:35:58.947 [2024-10-28 05:11:49.376448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.376574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.376601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.376616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.376629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.376669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 
00:35:58.947 [2024-10-28 05:11:49.386447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.386563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.386590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.386605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.386619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.386660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 00:35:58.947 [2024-10-28 05:11:49.396459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.396577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.396604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.396619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.396641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.396674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 00:35:58.947 [2024-10-28 05:11:49.406522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.406661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.406690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.406706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.406719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.406749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 
00:35:58.947 [2024-10-28 05:11:49.416575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.416690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.416715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.416730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.416743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.416775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 00:35:58.947 [2024-10-28 05:11:49.426483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.426614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.426652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.426669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.426684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.426714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 00:35:58.947 [2024-10-28 05:11:49.436479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.436615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.436650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.436667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.436682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.436713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 
00:35:58.947 [2024-10-28 05:11:49.446572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.446710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.446741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.446756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.446770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.446801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 00:35:58.947 [2024-10-28 05:11:49.456518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.456642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.456670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.456685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.456698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.456742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 00:35:58.947 [2024-10-28 05:11:49.466522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.466642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.466670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.466685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.466699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.466742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 
00:35:58.947 [2024-10-28 05:11:49.476507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.476661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.476691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.476707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.476722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.476752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 00:35:58.947 [2024-10-28 05:11:49.486517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.486644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.486671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.486693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.486707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.486740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 00:35:58.947 [2024-10-28 05:11:49.496607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.496727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.947 [2024-10-28 05:11:49.496754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.947 [2024-10-28 05:11:49.496769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.947 [2024-10-28 05:11:49.496783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.947 [2024-10-28 05:11:49.496815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.947 qpair failed and we were unable to recover it. 
00:35:58.947 [2024-10-28 05:11:49.506536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.947 [2024-10-28 05:11:49.506655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.948 [2024-10-28 05:11:49.506682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.948 [2024-10-28 05:11:49.506700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.948 [2024-10-28 05:11:49.506716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.948 [2024-10-28 05:11:49.506761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.948 qpair failed and we were unable to recover it. 00:35:58.948 [2024-10-28 05:11:49.516540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.948 [2024-10-28 05:11:49.516666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.948 [2024-10-28 05:11:49.516693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.948 [2024-10-28 05:11:49.516708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.948 [2024-10-28 05:11:49.516722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.948 [2024-10-28 05:11:49.516753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.948 qpair failed and we were unable to recover it. 00:35:58.948 [2024-10-28 05:11:49.526600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.948 [2024-10-28 05:11:49.526757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.948 [2024-10-28 05:11:49.526784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.948 [2024-10-28 05:11:49.526798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.948 [2024-10-28 05:11:49.526813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.948 [2024-10-28 05:11:49.526851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.948 qpair failed and we were unable to recover it. 
00:35:58.948 [2024-10-28 05:11:49.536590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.948 [2024-10-28 05:11:49.536736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.948 [2024-10-28 05:11:49.536765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.948 [2024-10-28 05:11:49.536781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.948 [2024-10-28 05:11:49.536795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:58.948 [2024-10-28 05:11:49.536827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.948 qpair failed and we were unable to recover it. 00:35:59.207 [2024-10-28 05:11:49.546574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.207 [2024-10-28 05:11:49.546692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.207 [2024-10-28 05:11:49.546718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.207 [2024-10-28 05:11:49.546732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.207 [2024-10-28 05:11:49.546747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.207 [2024-10-28 05:11:49.546778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.207 qpair failed and we were unable to recover it. 00:35:59.207 [2024-10-28 05:11:49.556566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.207 [2024-10-28 05:11:49.556690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.207 [2024-10-28 05:11:49.556717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.207 [2024-10-28 05:11:49.556732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.207 [2024-10-28 05:11:49.556747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.207 [2024-10-28 05:11:49.556779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.207 qpair failed and we were unable to recover it. 
00:35:59.207 [2024-10-28 05:11:49.566593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.207 [2024-10-28 05:11:49.566728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.207 [2024-10-28 05:11:49.566755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.207 [2024-10-28 05:11:49.566770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.207 [2024-10-28 05:11:49.566785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.207 [2024-10-28 05:11:49.566815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.207 qpair failed and we were unable to recover it. 00:35:59.207 [2024-10-28 05:11:49.576567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.207 [2024-10-28 05:11:49.576690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.207 [2024-10-28 05:11:49.576717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.207 [2024-10-28 05:11:49.576732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.207 [2024-10-28 05:11:49.576746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.207 [2024-10-28 05:11:49.576778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.207 qpair failed and we were unable to recover it. 00:35:59.207 [2024-10-28 05:11:49.586557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.207 [2024-10-28 05:11:49.586682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.207 [2024-10-28 05:11:49.586708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.207 [2024-10-28 05:11:49.586724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.207 [2024-10-28 05:11:49.586738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.207 [2024-10-28 05:11:49.586769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.207 qpair failed and we were unable to recover it. 
00:35:59.207 [2024-10-28 05:11:49.596555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.207 [2024-10-28 05:11:49.596676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.207 [2024-10-28 05:11:49.596703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.207 [2024-10-28 05:11:49.596718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.207 [2024-10-28 05:11:49.596732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.207 [2024-10-28 05:11:49.596763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.207 qpair failed and we were unable to recover it. 00:35:59.207 [2024-10-28 05:11:49.606579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.207 [2024-10-28 05:11:49.606713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.207 [2024-10-28 05:11:49.606741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.207 [2024-10-28 05:11:49.606756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.207 [2024-10-28 05:11:49.606770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.207 [2024-10-28 05:11:49.606815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.207 qpair failed and we were unable to recover it. 00:35:59.207 [2024-10-28 05:11:49.616659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.207 [2024-10-28 05:11:49.616773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.207 [2024-10-28 05:11:49.616799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.207 [2024-10-28 05:11:49.616821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.207 [2024-10-28 05:11:49.616836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.207 [2024-10-28 05:11:49.616868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.207 qpair failed and we were unable to recover it. 
00:35:59.207 [2024-10-28 05:11:49.626551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.207 [2024-10-28 05:11:49.626666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.207 [2024-10-28 05:11:49.626694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.207 [2024-10-28 05:11:49.626709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.207 [2024-10-28 05:11:49.626723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.207 [2024-10-28 05:11:49.626755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.207 qpair failed and we were unable to recover it. 00:35:59.207 [2024-10-28 05:11:49.636557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.207 [2024-10-28 05:11:49.636673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.207 [2024-10-28 05:11:49.636701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.207 [2024-10-28 05:11:49.636716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.207 [2024-10-28 05:11:49.636730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.207 [2024-10-28 05:11:49.636762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.207 qpair failed and we were unable to recover it. 00:35:59.207 [2024-10-28 05:11:49.646609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.646733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.646760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.646775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.646789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.646820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 
00:35:59.208 [2024-10-28 05:11:49.656591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.656722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.656750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.656765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.656779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.656829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 00:35:59.208 [2024-10-28 05:11:49.666588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.666722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.666750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.666768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.666783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.666826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 00:35:59.208 [2024-10-28 05:11:49.676594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.676736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.676763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.676779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.676794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.676825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 
00:35:59.208 [2024-10-28 05:11:49.686700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.686818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.686845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.686860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.686875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.686906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 00:35:59.208 [2024-10-28 05:11:49.696628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.696753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.696779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.696795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.696809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.696840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 00:35:59.208 [2024-10-28 05:11:49.706621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.706750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.706778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.706793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.706807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.706840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 
00:35:59.208 [2024-10-28 05:11:49.716696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.716810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.716837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.716852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.716867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.716898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 00:35:59.208 [2024-10-28 05:11:49.726651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.726778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.726805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.726821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.726836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.726867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 00:35:59.208 [2024-10-28 05:11:49.736614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.736737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.736764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.736779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.736794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.736825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 
00:35:59.208 [2024-10-28 05:11:49.746770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.746891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.746923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.746946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.746960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.746992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 00:35:59.208 [2024-10-28 05:11:49.756621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.756752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.756779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.756794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.756808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.756839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 00:35:59.208 [2024-10-28 05:11:49.766680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.766808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.766834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.766850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.766864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.766895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 
00:35:59.208 [2024-10-28 05:11:49.776676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.208 [2024-10-28 05:11:49.776796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.208 [2024-10-28 05:11:49.776823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.208 [2024-10-28 05:11:49.776838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.208 [2024-10-28 05:11:49.776852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.208 [2024-10-28 05:11:49.776896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.208 qpair failed and we were unable to recover it. 00:35:59.209 [2024-10-28 05:11:49.786641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.209 [2024-10-28 05:11:49.786755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.209 [2024-10-28 05:11:49.786782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.209 [2024-10-28 05:11:49.786796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.209 [2024-10-28 05:11:49.786816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.209 [2024-10-28 05:11:49.786849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.209 qpair failed and we were unable to recover it. 00:35:59.209 [2024-10-28 05:11:49.796684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.209 [2024-10-28 05:11:49.796799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.209 [2024-10-28 05:11:49.796825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.209 [2024-10-28 05:11:49.796840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.209 [2024-10-28 05:11:49.796854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.209 [2024-10-28 05:11:49.796885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.209 qpair failed and we were unable to recover it. 
00:35:59.467 [2024-10-28 05:11:49.806666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.467 [2024-10-28 05:11:49.806792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.467 [2024-10-28 05:11:49.806818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.467 [2024-10-28 05:11:49.806833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.467 [2024-10-28 05:11:49.806848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.467 [2024-10-28 05:11:49.806878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.467 qpair failed and we were unable to recover it. 00:35:59.467 [2024-10-28 05:11:49.816684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.467 [2024-10-28 05:11:49.816821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.467 [2024-10-28 05:11:49.816849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.467 [2024-10-28 05:11:49.816865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.467 [2024-10-28 05:11:49.816884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.467 [2024-10-28 05:11:49.816932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.467 qpair failed and we were unable to recover it. 00:35:59.467 [2024-10-28 05:11:49.826698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.467 [2024-10-28 05:11:49.826819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.467 [2024-10-28 05:11:49.826847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.467 [2024-10-28 05:11:49.826864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.467 [2024-10-28 05:11:49.826880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.467 [2024-10-28 05:11:49.826911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.467 qpair failed and we were unable to recover it. 
00:35:59.467 [2024-10-28 05:11:49.836689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.467 [2024-10-28 05:11:49.836810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.467 [2024-10-28 05:11:49.836838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.467 [2024-10-28 05:11:49.836853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.467 [2024-10-28 05:11:49.836868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.467 [2024-10-28 05:11:49.836912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.467 qpair failed and we were unable to recover it. 00:35:59.468 [2024-10-28 05:11:49.846711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.846833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.846860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.846875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.846889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.846922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 00:35:59.468 [2024-10-28 05:11:49.856697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.856813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.856839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.856854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.856869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.856900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 
00:35:59.468 [2024-10-28 05:11:49.866684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.866803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.866829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.866845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.866859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.866891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 00:35:59.468 [2024-10-28 05:11:49.876766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.876879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.876926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.876942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.876957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.877001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 00:35:59.468 [2024-10-28 05:11:49.886707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.886829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.886856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.886871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.886886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.886917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 
00:35:59.468 [2024-10-28 05:11:49.896712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.896878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.896905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.896920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.896934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.896965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 00:35:59.468 [2024-10-28 05:11:49.906701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.906828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.906855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.906870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.906884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.906917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 00:35:59.468 [2024-10-28 05:11:49.916729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.916851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.916877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.916892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.916912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.916945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 
00:35:59.468 [2024-10-28 05:11:49.926709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.926824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.926851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.926866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.926880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.926911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 00:35:59.468 [2024-10-28 05:11:49.936720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.936843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.936869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.936884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.936899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.936929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 00:35:59.468 [2024-10-28 05:11:49.946723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.946838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.946865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.946879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.946894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.946925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 
00:35:59.468 [2024-10-28 05:11:49.956720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.956834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.956860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.956876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.956890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.956921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 00:35:59.468 [2024-10-28 05:11:49.966706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.966824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.966850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.966865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.468 [2024-10-28 05:11:49.966879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.468 [2024-10-28 05:11:49.966910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.468 qpair failed and we were unable to recover it. 00:35:59.468 [2024-10-28 05:11:49.976705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.468 [2024-10-28 05:11:49.976816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.468 [2024-10-28 05:11:49.976843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.468 [2024-10-28 05:11:49.976858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.469 [2024-10-28 05:11:49.976872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.469 [2024-10-28 05:11:49.976903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.469 qpair failed and we were unable to recover it. 
00:35:59.469 [2024-10-28 05:11:49.986724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.469 [2024-10-28 05:11:49.986841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.469 [2024-10-28 05:11:49.986868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.469 [2024-10-28 05:11:49.986883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.469 [2024-10-28 05:11:49.986898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.469 [2024-10-28 05:11:49.986930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.469 qpair failed and we were unable to recover it. 00:35:59.469 [2024-10-28 05:11:49.996785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.469 [2024-10-28 05:11:49.996905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.469 [2024-10-28 05:11:49.996931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.469 [2024-10-28 05:11:49.996946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.469 [2024-10-28 05:11:49.996961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.469 [2024-10-28 05:11:49.996992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.469 qpair failed and we were unable to recover it. 00:35:59.469 [2024-10-28 05:11:50.006754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.469 [2024-10-28 05:11:50.006872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.469 [2024-10-28 05:11:50.006905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.469 [2024-10-28 05:11:50.006930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.469 [2024-10-28 05:11:50.006945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.469 [2024-10-28 05:11:50.006978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.469 qpair failed and we were unable to recover it. 
00:35:59.469 [2024-10-28 05:11:50.016779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.469 [2024-10-28 05:11:50.016912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.469 [2024-10-28 05:11:50.016942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.469 [2024-10-28 05:11:50.016958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.469 [2024-10-28 05:11:50.016973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.469 [2024-10-28 05:11:50.017005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.469 qpair failed and we were unable to recover it. 00:35:59.469 [2024-10-28 05:11:50.026787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.469 [2024-10-28 05:11:50.026905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.469 [2024-10-28 05:11:50.026935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.469 [2024-10-28 05:11:50.026950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.469 [2024-10-28 05:11:50.026965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.469 [2024-10-28 05:11:50.026996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.469 qpair failed and we were unable to recover it. 00:35:59.469 [2024-10-28 05:11:50.036872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.469 [2024-10-28 05:11:50.036985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.469 [2024-10-28 05:11:50.037015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.469 [2024-10-28 05:11:50.037031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.469 [2024-10-28 05:11:50.037046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.469 [2024-10-28 05:11:50.037079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.469 qpair failed and we were unable to recover it. 
00:35:59.469 [2024-10-28 05:11:50.046809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.469 [2024-10-28 05:11:50.046976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.469 [2024-10-28 05:11:50.047003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.469 [2024-10-28 05:11:50.047029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.469 [2024-10-28 05:11:50.047045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.469 [2024-10-28 05:11:50.047089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.469 qpair failed and we were unable to recover it. 00:35:59.469 [2024-10-28 05:11:50.056811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.469 [2024-10-28 05:11:50.056929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.469 [2024-10-28 05:11:50.056955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.469 [2024-10-28 05:11:50.056972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.469 [2024-10-28 05:11:50.056987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.469 [2024-10-28 05:11:50.057032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.469 qpair failed and we were unable to recover it. 00:35:59.729 [2024-10-28 05:11:50.066759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.729 [2024-10-28 05:11:50.066878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.729 [2024-10-28 05:11:50.066905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.729 [2024-10-28 05:11:50.066919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.729 [2024-10-28 05:11:50.066934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.729 [2024-10-28 05:11:50.066966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.729 qpair failed and we were unable to recover it. 
00:35:59.729 [2024-10-28 05:11:50.076805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.729 [2024-10-28 05:11:50.076929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.729 [2024-10-28 05:11:50.076955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.729 [2024-10-28 05:11:50.076971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.729 [2024-10-28 05:11:50.076986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.729 [2024-10-28 05:11:50.077020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.729 qpair failed and we were unable to recover it. 00:35:59.729 [2024-10-28 05:11:50.086784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.729 [2024-10-28 05:11:50.086936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.729 [2024-10-28 05:11:50.086963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.729 [2024-10-28 05:11:50.086978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.729 [2024-10-28 05:11:50.086992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.729 [2024-10-28 05:11:50.087029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.729 qpair failed and we were unable to recover it. 00:35:59.729 [2024-10-28 05:11:50.096780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.729 [2024-10-28 05:11:50.096910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.729 [2024-10-28 05:11:50.096937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.729 [2024-10-28 05:11:50.096952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.729 [2024-10-28 05:11:50.096966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.729 [2024-10-28 05:11:50.096998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.729 qpair failed and we were unable to recover it. 
00:35:59.729 [2024-10-28 05:11:50.106787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.729 [2024-10-28 05:11:50.106909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.729 [2024-10-28 05:11:50.106936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.729 [2024-10-28 05:11:50.106951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.729 [2024-10-28 05:11:50.106965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.729 [2024-10-28 05:11:50.106996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.729 qpair failed and we were unable to recover it. 00:35:59.729 [2024-10-28 05:11:50.116851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.729 [2024-10-28 05:11:50.116961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.729 [2024-10-28 05:11:50.116987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.729 [2024-10-28 05:11:50.117003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.729 [2024-10-28 05:11:50.117017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.729 [2024-10-28 05:11:50.117049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.729 qpair failed and we were unable to recover it. 00:35:59.729 [2024-10-28 05:11:50.126773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.729 [2024-10-28 05:11:50.126889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.729 [2024-10-28 05:11:50.126915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.729 [2024-10-28 05:11:50.126931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.729 [2024-10-28 05:11:50.126945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.729 [2024-10-28 05:11:50.126976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.729 qpair failed and we were unable to recover it. 
00:35:59.729 [2024-10-28 05:11:50.136816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.729 [2024-10-28 05:11:50.136957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.729 [2024-10-28 05:11:50.136986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.729 [2024-10-28 05:11:50.137006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.729 [2024-10-28 05:11:50.137022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.729 [2024-10-28 05:11:50.137069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.729 qpair failed and we were unable to recover it. 00:35:59.729 [2024-10-28 05:11:50.146787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.729 [2024-10-28 05:11:50.146901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.729 [2024-10-28 05:11:50.146928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.729 [2024-10-28 05:11:50.146943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.729 [2024-10-28 05:11:50.146957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.729 [2024-10-28 05:11:50.146990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.729 qpair failed and we were unable to recover it. 00:35:59.729 [2024-10-28 05:11:50.156787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.729 [2024-10-28 05:11:50.156897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.729 [2024-10-28 05:11:50.156924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.729 [2024-10-28 05:11:50.156938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.729 [2024-10-28 05:11:50.156952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.729 [2024-10-28 05:11:50.156983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.729 qpair failed and we were unable to recover it. 
00:35:59.729 [2024-10-28 05:11:50.166852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.729 [2024-10-28 05:11:50.166990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.729 [2024-10-28 05:11:50.167018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.729 [2024-10-28 05:11:50.167035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.729 [2024-10-28 05:11:50.167052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.729 [2024-10-28 05:11:50.167084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.729 qpair failed and we were unable to recover it. 00:35:59.729 [2024-10-28 05:11:50.176832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.729 [2024-10-28 05:11:50.176955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.176982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.177003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.177018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.177050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 00:35:59.730 [2024-10-28 05:11:50.186831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.186949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.186976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.186991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.187005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.187035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 
00:35:59.730 [2024-10-28 05:11:50.196831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.196950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.196984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.196999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.197014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.197044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 00:35:59.730 [2024-10-28 05:11:50.206829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.206956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.206986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.207002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.207017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.207049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 00:35:59.730 [2024-10-28 05:11:50.216846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.216957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.216985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.217000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.217012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.217050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 
00:35:59.730 [2024-10-28 05:11:50.226830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.226937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.226965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.226980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.226993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.227024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 00:35:59.730 [2024-10-28 05:11:50.236810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.236918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.236951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.236966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.236980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.237010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 00:35:59.730 [2024-10-28 05:11:50.246858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.246977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.247004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.247018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.247032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.247063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 
00:35:59.730 [2024-10-28 05:11:50.256824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.256936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.256961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.256976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.256989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.257018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 00:35:59.730 [2024-10-28 05:11:50.266861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.267017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.267044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.267059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.267074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.267105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 00:35:59.730 [2024-10-28 05:11:50.276915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.277026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.277053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.277068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.277082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.277112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 
00:35:59.730 [2024-10-28 05:11:50.286877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.287025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.287053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.287067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.287081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.287111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 00:35:59.730 [2024-10-28 05:11:50.296886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.297007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.297036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.297054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.297069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.297126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 00:35:59.730 [2024-10-28 05:11:50.306992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.730 [2024-10-28 05:11:50.307127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.730 [2024-10-28 05:11:50.307160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.730 [2024-10-28 05:11:50.307176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.730 [2024-10-28 05:11:50.307189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.730 [2024-10-28 05:11:50.307221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.730 qpair failed and we were unable to recover it. 
00:35:59.731 [2024-10-28 05:11:50.316865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.731 [2024-10-28 05:11:50.316969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.731 [2024-10-28 05:11:50.316996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.731 [2024-10-28 05:11:50.317012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.731 [2024-10-28 05:11:50.317026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.731 [2024-10-28 05:11:50.317056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.731 qpair failed and we were unable to recover it. 00:35:59.990 [2024-10-28 05:11:50.326897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.990 [2024-10-28 05:11:50.327005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.991 [2024-10-28 05:11:50.327031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.991 [2024-10-28 05:11:50.327048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.991 [2024-10-28 05:11:50.327062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.991 [2024-10-28 05:11:50.327093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.991 qpair failed and we were unable to recover it. 00:35:59.991 [2024-10-28 05:11:50.336879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.991 [2024-10-28 05:11:50.336992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.991 [2024-10-28 05:11:50.337018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.991 [2024-10-28 05:11:50.337033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.991 [2024-10-28 05:11:50.337047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.991 [2024-10-28 05:11:50.337078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.991 qpair failed and we were unable to recover it. 
00:35:59.991 [2024-10-28 05:11:50.346888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.991 [2024-10-28 05:11:50.347000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.991 [2024-10-28 05:11:50.347028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.991 [2024-10-28 05:11:50.347043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.991 [2024-10-28 05:11:50.347062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.991 [2024-10-28 05:11:50.347094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.991 qpair failed and we were unable to recover it. 00:35:59.991 [2024-10-28 05:11:50.356918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.991 [2024-10-28 05:11:50.357031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.991 [2024-10-28 05:11:50.357058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.991 [2024-10-28 05:11:50.357072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.991 [2024-10-28 05:11:50.357086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.991 [2024-10-28 05:11:50.357116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.991 qpair failed and we were unable to recover it. 00:35:59.991 [2024-10-28 05:11:50.366895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.991 [2024-10-28 05:11:50.367005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.991 [2024-10-28 05:11:50.367031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.991 [2024-10-28 05:11:50.367046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.991 [2024-10-28 05:11:50.367059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.991 [2024-10-28 05:11:50.367090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.991 qpair failed and we were unable to recover it. 
00:35:59.991 [2024-10-28 05:11:50.376931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.991 [2024-10-28 05:11:50.377049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.991 [2024-10-28 05:11:50.377077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.991 [2024-10-28 05:11:50.377092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.991 [2024-10-28 05:11:50.377106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.991 [2024-10-28 05:11:50.377150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.991 qpair failed and we were unable to recover it. 00:35:59.991 [2024-10-28 05:11:50.386888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.991 [2024-10-28 05:11:50.387019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.991 [2024-10-28 05:11:50.387046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.991 [2024-10-28 05:11:50.387068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.991 [2024-10-28 05:11:50.387082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.991 [2024-10-28 05:11:50.387112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.991 qpair failed and we were unable to recover it. 00:35:59.991 [2024-10-28 05:11:50.396918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.991 [2024-10-28 05:11:50.397044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.991 [2024-10-28 05:11:50.397072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.991 [2024-10-28 05:11:50.397087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.991 [2024-10-28 05:11:50.397101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.991 [2024-10-28 05:11:50.397133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.991 qpair failed and we were unable to recover it. 
00:35:59.991 [2024-10-28 05:11:50.406963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.991 [2024-10-28 05:11:50.407084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.991 [2024-10-28 05:11:50.407110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.991 [2024-10-28 05:11:50.407125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.991 [2024-10-28 05:11:50.407139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.991 [2024-10-28 05:11:50.407171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.991 qpair failed and we were unable to recover it. 00:35:59.991 [2024-10-28 05:11:50.416965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.991 [2024-10-28 05:11:50.417102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.991 [2024-10-28 05:11:50.417128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.991 [2024-10-28 05:11:50.417144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.991 [2024-10-28 05:11:50.417161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.991 [2024-10-28 05:11:50.417194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.991 qpair failed and we were unable to recover it. 00:35:59.991 [2024-10-28 05:11:50.426933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.992 [2024-10-28 05:11:50.427038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.992 [2024-10-28 05:11:50.427065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.992 [2024-10-28 05:11:50.427079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.992 [2024-10-28 05:11:50.427093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.992 [2024-10-28 05:11:50.427125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.992 qpair failed and we were unable to recover it. 
00:35:59.992 [2024-10-28 05:11:50.436963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.992 [2024-10-28 05:11:50.437120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.992 [2024-10-28 05:11:50.437155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.992 [2024-10-28 05:11:50.437174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.992 [2024-10-28 05:11:50.437205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.992 [2024-10-28 05:11:50.437236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.992 qpair failed and we were unable to recover it. 00:35:59.992 [2024-10-28 05:11:50.446954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.992 [2024-10-28 05:11:50.447070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.992 [2024-10-28 05:11:50.447107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.992 [2024-10-28 05:11:50.447123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.992 [2024-10-28 05:11:50.447137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.992 [2024-10-28 05:11:50.447168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.992 qpair failed and we were unable to recover it. 00:35:59.992 [2024-10-28 05:11:50.457032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.992 [2024-10-28 05:11:50.457148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.992 [2024-10-28 05:11:50.457174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.992 [2024-10-28 05:11:50.457188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.992 [2024-10-28 05:11:50.457201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.992 [2024-10-28 05:11:50.457233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.992 qpair failed and we were unable to recover it. 
00:35:59.992 [2024-10-28 05:11:50.466938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.992 [2024-10-28 05:11:50.467049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.992 [2024-10-28 05:11:50.467077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.992 [2024-10-28 05:11:50.467091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.992 [2024-10-28 05:11:50.467104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.992 [2024-10-28 05:11:50.467135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.992 qpair failed and we were unable to recover it. 00:35:59.992 [2024-10-28 05:11:50.476959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.992 [2024-10-28 05:11:50.477065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.992 [2024-10-28 05:11:50.477092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.992 [2024-10-28 05:11:50.477107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.992 [2024-10-28 05:11:50.477127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.992 [2024-10-28 05:11:50.477179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.992 qpair failed and we were unable to recover it. 00:35:59.992 [2024-10-28 05:11:50.487034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.992 [2024-10-28 05:11:50.487148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.992 [2024-10-28 05:11:50.487175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.992 [2024-10-28 05:11:50.487190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.992 [2024-10-28 05:11:50.487203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.992 [2024-10-28 05:11:50.487234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.992 qpair failed and we were unable to recover it. 
00:35:59.992 [2024-10-28 05:11:50.496980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.992 [2024-10-28 05:11:50.497137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.992 [2024-10-28 05:11:50.497164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.992 [2024-10-28 05:11:50.497178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.992 [2024-10-28 05:11:50.497192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.992 [2024-10-28 05:11:50.497222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.992 qpair failed and we were unable to recover it. 00:35:59.992 [2024-10-28 05:11:50.507008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.992 [2024-10-28 05:11:50.507121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.992 [2024-10-28 05:11:50.507147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.992 [2024-10-28 05:11:50.507162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.992 [2024-10-28 05:11:50.507176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.992 [2024-10-28 05:11:50.507207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.992 qpair failed and we were unable to recover it. 00:35:59.992 [2024-10-28 05:11:50.516973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.992 [2024-10-28 05:11:50.517120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.992 [2024-10-28 05:11:50.517146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.992 [2024-10-28 05:11:50.517161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.992 [2024-10-28 05:11:50.517174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.992 [2024-10-28 05:11:50.517207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.992 qpair failed and we were unable to recover it. 
00:35:59.993 [2024-10-28 05:11:50.527000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.993 [2024-10-28 05:11:50.527117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.993 [2024-10-28 05:11:50.527143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.993 [2024-10-28 05:11:50.527158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.993 [2024-10-28 05:11:50.527172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.993 [2024-10-28 05:11:50.527202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.993 qpair failed and we were unable to recover it. 00:35:59.993 [2024-10-28 05:11:50.536961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.993 [2024-10-28 05:11:50.537077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.993 [2024-10-28 05:11:50.537103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.993 [2024-10-28 05:11:50.537118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.993 [2024-10-28 05:11:50.537131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.993 [2024-10-28 05:11:50.537162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.993 qpair failed and we were unable to recover it. 00:35:59.993 [2024-10-28 05:11:50.546990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.993 [2024-10-28 05:11:50.547098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.993 [2024-10-28 05:11:50.547124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.993 [2024-10-28 05:11:50.547139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.993 [2024-10-28 05:11:50.547152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.993 [2024-10-28 05:11:50.547183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.993 qpair failed and we were unable to recover it. 
00:35:59.993 [2024-10-28 05:11:50.557016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.993 [2024-10-28 05:11:50.557175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.993 [2024-10-28 05:11:50.557202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.993 [2024-10-28 05:11:50.557222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.993 [2024-10-28 05:11:50.557239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.993 [2024-10-28 05:11:50.557287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.993 qpair failed and we were unable to recover it. 00:35:59.993 [2024-10-28 05:11:50.567001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.993 [2024-10-28 05:11:50.567122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.993 [2024-10-28 05:11:50.567149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.993 [2024-10-28 05:11:50.567163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.993 [2024-10-28 05:11:50.567177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.993 [2024-10-28 05:11:50.567207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.993 qpair failed and we were unable to recover it. 00:35:59.993 [2024-10-28 05:11:50.577074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.993 [2024-10-28 05:11:50.577184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.993 [2024-10-28 05:11:50.577211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.993 [2024-10-28 05:11:50.577226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.993 [2024-10-28 05:11:50.577239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:35:59.993 [2024-10-28 05:11:50.577269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:59.993 qpair failed and we were unable to recover it. 
00:36:00.253 [2024-10-28 05:11:50.587099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.253 [2024-10-28 05:11:50.587213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.253 [2024-10-28 05:11:50.587239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.253 [2024-10-28 05:11:50.587254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.253 [2024-10-28 05:11:50.587267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.253 [2024-10-28 05:11:50.587298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.253 qpair failed and we were unable to recover it. 00:36:00.253 [2024-10-28 05:11:50.596999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.253 [2024-10-28 05:11:50.597114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.253 [2024-10-28 05:11:50.597141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.253 [2024-10-28 05:11:50.597155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.253 [2024-10-28 05:11:50.597169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.253 [2024-10-28 05:11:50.597200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.253 qpair failed and we were unable to recover it. 00:36:00.253 [2024-10-28 05:11:50.607005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.253 [2024-10-28 05:11:50.607130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.253 [2024-10-28 05:11:50.607157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.607177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.607192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.607224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 
00:36:00.254 [2024-10-28 05:11:50.617032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.617153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.617178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.617193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.617206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.617239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 00:36:00.254 [2024-10-28 05:11:50.627090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.627198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.627225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.627249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.627263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.627294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 00:36:00.254 [2024-10-28 05:11:50.637032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.637145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.637171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.637187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.637200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.637243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 
00:36:00.254 [2024-10-28 05:11:50.647051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.647165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.647190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.647205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.647218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.647256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 00:36:00.254 [2024-10-28 05:11:50.657068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.657221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.657248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.657264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.657278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.657324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 00:36:00.254 [2024-10-28 05:11:50.667049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.667164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.667190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.667205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.667218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.667262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 
00:36:00.254 [2024-10-28 05:11:50.677055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.677169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.677194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.677209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.677222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.677280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 00:36:00.254 [2024-10-28 05:11:50.687125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.687241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.687266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.687281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.687295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.687326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 00:36:00.254 [2024-10-28 05:11:50.697122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.697236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.697262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.697277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.697290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.697322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 
00:36:00.254 [2024-10-28 05:11:50.707054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.707156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.707181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.707196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.707209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.707239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 00:36:00.254 [2024-10-28 05:11:50.717060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.717164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.717188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.717203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.717217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.717247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 00:36:00.254 [2024-10-28 05:11:50.727075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.727188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.727213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.727227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.727241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.727272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 
00:36:00.254 [2024-10-28 05:11:50.737104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.254 [2024-10-28 05:11:50.737226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.254 [2024-10-28 05:11:50.737262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.254 [2024-10-28 05:11:50.737283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.254 [2024-10-28 05:11:50.737299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.254 [2024-10-28 05:11:50.737331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.254 qpair failed and we were unable to recover it. 00:36:00.254 [2024-10-28 05:11:50.747078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.255 [2024-10-28 05:11:50.747191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.255 [2024-10-28 05:11:50.747216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.255 [2024-10-28 05:11:50.747231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.255 [2024-10-28 05:11:50.747244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.255 [2024-10-28 05:11:50.747275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.255 qpair failed and we were unable to recover it. 00:36:00.255 [2024-10-28 05:11:50.757064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.255 [2024-10-28 05:11:50.757171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.255 [2024-10-28 05:11:50.757196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.255 [2024-10-28 05:11:50.757211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.255 [2024-10-28 05:11:50.757224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.255 [2024-10-28 05:11:50.757255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.255 qpair failed and we were unable to recover it. 
00:36:00.255 [2024-10-28 05:11:50.767077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.255 [2024-10-28 05:11:50.767191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.255 [2024-10-28 05:11:50.767216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.255 [2024-10-28 05:11:50.767231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.255 [2024-10-28 05:11:50.767245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.255 [2024-10-28 05:11:50.767275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.255 qpair failed and we were unable to recover it. 00:36:00.255 [2024-10-28 05:11:50.777072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.255 [2024-10-28 05:11:50.777184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.255 [2024-10-28 05:11:50.777209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.255 [2024-10-28 05:11:50.777224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.255 [2024-10-28 05:11:50.777238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.255 [2024-10-28 05:11:50.777274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.255 qpair failed and we were unable to recover it. 00:36:00.255 [2024-10-28 05:11:50.787143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.255 [2024-10-28 05:11:50.787289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.255 [2024-10-28 05:11:50.787318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.255 [2024-10-28 05:11:50.787333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.255 [2024-10-28 05:11:50.787347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.255 [2024-10-28 05:11:50.787391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.255 qpair failed and we were unable to recover it. 
00:36:00.255 [2024-10-28 05:11:50.797073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.255 [2024-10-28 05:11:50.797175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.255 [2024-10-28 05:11:50.797200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.255 [2024-10-28 05:11:50.797215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.255 [2024-10-28 05:11:50.797229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.255 [2024-10-28 05:11:50.797260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.255 qpair failed and we were unable to recover it. 00:36:00.255 [2024-10-28 05:11:50.807095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.255 [2024-10-28 05:11:50.807221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.255 [2024-10-28 05:11:50.807247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.255 [2024-10-28 05:11:50.807262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.255 [2024-10-28 05:11:50.807276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.255 [2024-10-28 05:11:50.807308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.255 qpair failed and we were unable to recover it. 00:36:00.255 [2024-10-28 05:11:50.817141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.255 [2024-10-28 05:11:50.817302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.255 [2024-10-28 05:11:50.817330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.255 [2024-10-28 05:11:50.817345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.255 [2024-10-28 05:11:50.817359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.255 [2024-10-28 05:11:50.817406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.255 qpair failed and we were unable to recover it. 
00:36:00.255 [2024-10-28 05:11:50.827160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.255 [2024-10-28 05:11:50.827277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.255 [2024-10-28 05:11:50.827313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.255 [2024-10-28 05:11:50.827329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.255 [2024-10-28 05:11:50.827342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.255 [2024-10-28 05:11:50.827388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.255 qpair failed and we were unable to recover it. 00:36:00.255 [2024-10-28 05:11:50.837169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.255 [2024-10-28 05:11:50.837321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.255 [2024-10-28 05:11:50.837348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.255 [2024-10-28 05:11:50.837363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.255 [2024-10-28 05:11:50.837378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.255 [2024-10-28 05:11:50.837408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.255 qpair failed and we were unable to recover it. 00:36:00.515 [2024-10-28 05:11:50.847155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.515 [2024-10-28 05:11:50.847315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.515 [2024-10-28 05:11:50.847344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.515 [2024-10-28 05:11:50.847364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.515 [2024-10-28 05:11:50.847379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.515 [2024-10-28 05:11:50.847423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.515 qpair failed and we were unable to recover it. 
00:36:00.515 [2024-10-28 05:11:50.857093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.515 [2024-10-28 05:11:50.857205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.515 [2024-10-28 05:11:50.857231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.515 [2024-10-28 05:11:50.857246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.515 [2024-10-28 05:11:50.857260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.515 [2024-10-28 05:11:50.857292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.515 qpair failed and we were unable to recover it. 00:36:00.515 [2024-10-28 05:11:50.867199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.515 [2024-10-28 05:11:50.867312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.515 [2024-10-28 05:11:50.867342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.515 [2024-10-28 05:11:50.867359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.515 [2024-10-28 05:11:50.867372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.515 [2024-10-28 05:11:50.867404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.515 qpair failed and we were unable to recover it. 00:36:00.515 [2024-10-28 05:11:50.877101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.515 [2024-10-28 05:11:50.877214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.515 [2024-10-28 05:11:50.877239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.515 [2024-10-28 05:11:50.877254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.515 [2024-10-28 05:11:50.877267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.515 [2024-10-28 05:11:50.877298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.515 qpair failed and we were unable to recover it. 
00:36:00.515 [2024-10-28 05:11:50.887171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.515 [2024-10-28 05:11:50.887294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.515 [2024-10-28 05:11:50.887322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.515 [2024-10-28 05:11:50.887341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.515 [2024-10-28 05:11:50.887355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.515 [2024-10-28 05:11:50.887388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.515 qpair failed and we were unable to recover it. 00:36:00.515 [2024-10-28 05:11:50.897148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.515 [2024-10-28 05:11:50.897278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.515 [2024-10-28 05:11:50.897306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.515 [2024-10-28 05:11:50.897321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.515 [2024-10-28 05:11:50.897335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.515 [2024-10-28 05:11:50.897366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.515 qpair failed and we were unable to recover it. 00:36:00.515 [2024-10-28 05:11:50.907114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.515 [2024-10-28 05:11:50.907260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.515 [2024-10-28 05:11:50.907287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.515 [2024-10-28 05:11:50.907303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.515 [2024-10-28 05:11:50.907322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.515 [2024-10-28 05:11:50.907354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.515 qpair failed and we were unable to recover it. 
00:36:00.515 [2024-10-28 05:11:50.917155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:50.917278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:50.917305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:50.917320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:50.917334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:50.917365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 00:36:00.516 [2024-10-28 05:11:50.927224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:50.927341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:50.927367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:50.927382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:50.927396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:50.927427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 00:36:00.516 [2024-10-28 05:11:50.937151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:50.937259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:50.937283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:50.937298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:50.937312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:50.937344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 
00:36:00.516 [2024-10-28 05:11:50.947182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:50.947337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:50.947364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:50.947380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:50.947393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:50.947423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 00:36:00.516 [2024-10-28 05:11:50.957154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:50.957303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:50.957331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:50.957346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:50.957359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:50.957391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 00:36:00.516 [2024-10-28 05:11:50.967183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:50.967302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:50.967328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:50.967343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:50.967361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:50.967392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 
00:36:00.516 [2024-10-28 05:11:50.977173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:50.977285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:50.977310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:50.977325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:50.977338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:50.977371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 00:36:00.516 [2024-10-28 05:11:50.987146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:50.987258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:50.987283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:50.987298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:50.987311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:50.987342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 00:36:00.516 [2024-10-28 05:11:50.997199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:50.997306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:50.997336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:50.997352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:50.997365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:50.997396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 
00:36:00.516 [2024-10-28 05:11:51.007175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:51.007290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:51.007316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:51.007330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:51.007345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:51.007375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 00:36:00.516 [2024-10-28 05:11:51.017264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:51.017384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:51.017411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:51.017426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:51.017440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:51.017471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 00:36:00.516 [2024-10-28 05:11:51.027243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:51.027370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:51.027398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:51.027413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:51.027427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:51.027470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 
00:36:00.516 [2024-10-28 05:11:51.037251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:51.037363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:51.037388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:51.037403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:51.037423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.516 [2024-10-28 05:11:51.037456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.516 qpair failed and we were unable to recover it. 00:36:00.516 [2024-10-28 05:11:51.047218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.516 [2024-10-28 05:11:51.047377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.516 [2024-10-28 05:11:51.047405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.516 [2024-10-28 05:11:51.047422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.516 [2024-10-28 05:11:51.047439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.517 [2024-10-28 05:11:51.047472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.517 qpair failed and we were unable to recover it. 00:36:00.517 [2024-10-28 05:11:51.057206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.517 [2024-10-28 05:11:51.057317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.517 [2024-10-28 05:11:51.057342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.517 [2024-10-28 05:11:51.057356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.517 [2024-10-28 05:11:51.057370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.517 [2024-10-28 05:11:51.057401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.517 qpair failed and we were unable to recover it. 
00:36:00.517 [2024-10-28 05:11:51.067208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.517 [2024-10-28 05:11:51.067313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.517 [2024-10-28 05:11:51.067339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.517 [2024-10-28 05:11:51.067355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.517 [2024-10-28 05:11:51.067368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.517 [2024-10-28 05:11:51.067413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.517 qpair failed and we were unable to recover it. 00:36:00.517 [2024-10-28 05:11:51.077220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.517 [2024-10-28 05:11:51.077362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.517 [2024-10-28 05:11:51.077389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.517 [2024-10-28 05:11:51.077405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.517 [2024-10-28 05:11:51.077418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.517 [2024-10-28 05:11:51.077449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.517 qpair failed and we were unable to recover it. 00:36:00.517 [2024-10-28 05:11:51.087254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.517 [2024-10-28 05:11:51.087373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.517 [2024-10-28 05:11:51.087400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.517 [2024-10-28 05:11:51.087415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.517 [2024-10-28 05:11:51.087429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.517 [2024-10-28 05:11:51.087460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.517 qpair failed and we were unable to recover it. 
00:36:00.517 [2024-10-28 05:11:51.097248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.517 [2024-10-28 05:11:51.097367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.517 [2024-10-28 05:11:51.097392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.517 [2024-10-28 05:11:51.097407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.517 [2024-10-28 05:11:51.097421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.517 [2024-10-28 05:11:51.097452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.517 qpair failed and we were unable to recover it. 00:36:00.517 [2024-10-28 05:11:51.107250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.517 [2024-10-28 05:11:51.107378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.517 [2024-10-28 05:11:51.107405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.517 [2024-10-28 05:11:51.107420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.517 [2024-10-28 05:11:51.107433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.517 [2024-10-28 05:11:51.107464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.517 qpair failed and we were unable to recover it. 00:36:00.776 [2024-10-28 05:11:51.117250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.776 [2024-10-28 05:11:51.117386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.776 [2024-10-28 05:11:51.117413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.776 [2024-10-28 05:11:51.117428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.776 [2024-10-28 05:11:51.117441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.776 [2024-10-28 05:11:51.117472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.776 qpair failed and we were unable to recover it. 
00:36:00.776 [2024-10-28 05:11:51.127253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.776 [2024-10-28 05:11:51.127375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.776 [2024-10-28 05:11:51.127400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.776 [2024-10-28 05:11:51.127415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.776 [2024-10-28 05:11:51.127429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.127460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-10-28 05:11:51.137249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.137371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.137406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.137421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.137435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.137466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-10-28 05:11:51.147227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.147337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.147362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.147377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.147392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.147423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 
00:36:00.777 [2024-10-28 05:11:51.157270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.157387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.157413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.157428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.157446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.157477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-10-28 05:11:51.167268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.167385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.167410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.167430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.167445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.167476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-10-28 05:11:51.177365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.177484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.177509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.177539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.177552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.177597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 
00:36:00.777 [2024-10-28 05:11:51.187319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.187466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.187494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.187509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.187523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.187554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-10-28 05:11:51.197277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.197384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.197409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.197424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.197437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.197469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-10-28 05:11:51.207308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.207428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.207452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.207468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.207482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.207519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 
00:36:00.777 [2024-10-28 05:11:51.217365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.217506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.217534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.217549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.217563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.217593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-10-28 05:11:51.227336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.227441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.227467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.227482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.227497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.227527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-10-28 05:11:51.237281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.237385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.237412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.237428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.237444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.237477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 
00:36:00.777 [2024-10-28 05:11:51.247265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.247383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.247410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.247425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.247438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.247469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-10-28 05:11:51.257296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.257414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.777 [2024-10-28 05:11:51.257440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.777 [2024-10-28 05:11:51.257454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.777 [2024-10-28 05:11:51.257470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.777 [2024-10-28 05:11:51.257501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-10-28 05:11:51.267398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.777 [2024-10-28 05:11:51.267541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.778 [2024-10-28 05:11:51.267568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.778 [2024-10-28 05:11:51.267583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.778 [2024-10-28 05:11:51.267597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.778 [2024-10-28 05:11:51.267649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.778 qpair failed and we were unable to recover it. 
00:36:00.778 [2024-10-28 05:11:51.277296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.778 [2024-10-28 05:11:51.277401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.778 [2024-10-28 05:11:51.277427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.778 [2024-10-28 05:11:51.277442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.778 [2024-10-28 05:11:51.277455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.778 [2024-10-28 05:11:51.277486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-10-28 05:11:51.287323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.778 [2024-10-28 05:11:51.287437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.778 [2024-10-28 05:11:51.287464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.778 [2024-10-28 05:11:51.287479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.778 [2024-10-28 05:11:51.287492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.778 [2024-10-28 05:11:51.287525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-10-28 05:11:51.297314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.778 [2024-10-28 05:11:51.297432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.778 [2024-10-28 05:11:51.297464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.778 [2024-10-28 05:11:51.297480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.778 [2024-10-28 05:11:51.297495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.778 [2024-10-28 05:11:51.297539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.778 qpair failed and we were unable to recover it. 
00:36:00.778 [2024-10-28 05:11:51.307287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.778 [2024-10-28 05:11:51.307399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.778 [2024-10-28 05:11:51.307426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.778 [2024-10-28 05:11:51.307440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.778 [2024-10-28 05:11:51.307454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.778 [2024-10-28 05:11:51.307486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-10-28 05:11:51.317386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.778 [2024-10-28 05:11:51.317494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.778 [2024-10-28 05:11:51.317520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.778 [2024-10-28 05:11:51.317535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.778 [2024-10-28 05:11:51.317550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.778 [2024-10-28 05:11:51.317581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-10-28 05:11:51.327370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.778 [2024-10-28 05:11:51.327493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.778 [2024-10-28 05:11:51.327519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.778 [2024-10-28 05:11:51.327535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.778 [2024-10-28 05:11:51.327550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.778 [2024-10-28 05:11:51.327580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.778 qpair failed and we were unable to recover it. 
00:36:00.778 [2024-10-28 05:11:51.337320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.778 [2024-10-28 05:11:51.337426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.778 [2024-10-28 05:11:51.337453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.778 [2024-10-28 05:11:51.337468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.778 [2024-10-28 05:11:51.337481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.778 [2024-10-28 05:11:51.337518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-10-28 05:11:51.347375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.778 [2024-10-28 05:11:51.347528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.778 [2024-10-28 05:11:51.347554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.778 [2024-10-28 05:11:51.347569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.778 [2024-10-28 05:11:51.347582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.778 [2024-10-28 05:11:51.347613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-10-28 05:11:51.357348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.778 [2024-10-28 05:11:51.357482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.778 [2024-10-28 05:11:51.357509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.778 [2024-10-28 05:11:51.357524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.778 [2024-10-28 05:11:51.357537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.778 [2024-10-28 05:11:51.357570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.778 qpair failed and we were unable to recover it. 
00:36:00.778 [2024-10-28 05:11:51.367371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.778 [2024-10-28 05:11:51.367493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.778 [2024-10-28 05:11:51.367519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.778 [2024-10-28 05:11:51.367535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.778 [2024-10-28 05:11:51.367549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:00.778 [2024-10-28 05:11:51.367580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.778 qpair failed and we were unable to recover it. 00:36:01.038 [2024-10-28 05:11:51.377325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.038 [2024-10-28 05:11:51.377435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.038 [2024-10-28 05:11:51.377462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.038 [2024-10-28 05:11:51.377477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.038 [2024-10-28 05:11:51.377492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.038 [2024-10-28 05:11:51.377523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.038 qpair failed and we were unable to recover it. 00:36:01.038 [2024-10-28 05:11:51.387347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.038 [2024-10-28 05:11:51.387452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.038 [2024-10-28 05:11:51.387479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.038 [2024-10-28 05:11:51.387494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.038 [2024-10-28 05:11:51.387507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.038 [2024-10-28 05:11:51.387551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.038 qpair failed and we were unable to recover it. 
00:36:01.038 [2024-10-28 05:11:51.397337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.038 [2024-10-28 05:11:51.397444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.038 [2024-10-28 05:11:51.397471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.038 [2024-10-28 05:11:51.397486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.038 [2024-10-28 05:11:51.397504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.038 [2024-10-28 05:11:51.397535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.038 qpair failed and we were unable to recover it. 00:36:01.038 [2024-10-28 05:11:51.407379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.038 [2024-10-28 05:11:51.407492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.038 [2024-10-28 05:11:51.407518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.038 [2024-10-28 05:11:51.407534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.038 [2024-10-28 05:11:51.407547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.038 [2024-10-28 05:11:51.407578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.038 qpair failed and we were unable to recover it. 00:36:01.038 [2024-10-28 05:11:51.417346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.038 [2024-10-28 05:11:51.417456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.038 [2024-10-28 05:11:51.417482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.038 [2024-10-28 05:11:51.417497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.038 [2024-10-28 05:11:51.417510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.038 [2024-10-28 05:11:51.417542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.038 qpair failed and we were unable to recover it. 
00:36:01.038 [2024-10-28 05:11:51.427379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.038 [2024-10-28 05:11:51.427490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.038 [2024-10-28 05:11:51.427524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.038 [2024-10-28 05:11:51.427540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.038 [2024-10-28 05:11:51.427554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.038 [2024-10-28 05:11:51.427585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.038 qpair failed and we were unable to recover it. 00:36:01.038 [2024-10-28 05:11:51.437369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.038 [2024-10-28 05:11:51.437498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.038 [2024-10-28 05:11:51.437525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.038 [2024-10-28 05:11:51.437540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.038 [2024-10-28 05:11:51.437553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.038 [2024-10-28 05:11:51.437584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.038 qpair failed and we were unable to recover it. 00:36:01.038 [2024-10-28 05:11:51.447390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.038 [2024-10-28 05:11:51.447503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.038 [2024-10-28 05:11:51.447529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.038 [2024-10-28 05:11:51.447544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.038 [2024-10-28 05:11:51.447559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.038 [2024-10-28 05:11:51.447589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.038 qpair failed and we were unable to recover it. 
00:36:01.038 [2024-10-28 05:11:51.457405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.038 [2024-10-28 05:11:51.457533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.038 [2024-10-28 05:11:51.457559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.038 [2024-10-28 05:11:51.457574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.038 [2024-10-28 05:11:51.457587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.038 [2024-10-28 05:11:51.457618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.038 qpair failed and we were unable to recover it. 00:36:01.038 [2024-10-28 05:11:51.467367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.038 [2024-10-28 05:11:51.467472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.038 [2024-10-28 05:11:51.467499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.038 [2024-10-28 05:11:51.467515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.038 [2024-10-28 05:11:51.467535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.038 [2024-10-28 05:11:51.467566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.038 qpair failed and we were unable to recover it. 00:36:01.038 [2024-10-28 05:11:51.477452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.038 [2024-10-28 05:11:51.477563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.038 [2024-10-28 05:11:51.477604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.038 [2024-10-28 05:11:51.477620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.038 [2024-10-28 05:11:51.477643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.038 [2024-10-28 05:11:51.477693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.038 qpair failed and we were unable to recover it. 
00:36:01.038 [2024-10-28 05:11:51.487387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.038 [2024-10-28 05:11:51.487504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.038 [2024-10-28 05:11:51.487531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.038 [2024-10-28 05:11:51.487547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.038 [2024-10-28 05:11:51.487562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.038 [2024-10-28 05:11:51.487592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.038 qpair failed and we were unable to recover it. 00:36:01.039 [2024-10-28 05:11:51.497407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.039 [2024-10-28 05:11:51.497534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.039 [2024-10-28 05:11:51.497561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.039 [2024-10-28 05:11:51.497577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.039 [2024-10-28 05:11:51.497590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f04000b90 00:36:01.039 [2024-10-28 05:11:51.497621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.039 qpair failed and we were unable to recover it. 00:36:01.039 [2024-10-28 05:11:51.507392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.039 [2024-10-28 05:11:51.507504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.039 [2024-10-28 05:11:51.507538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.039 [2024-10-28 05:11:51.507554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.039 [2024-10-28 05:11:51.507567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f10000b90 00:36:01.039 [2024-10-28 05:11:51.507601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.039 qpair failed and we were unable to recover it. 
00:36:01.039 [2024-10-28 05:11:51.517434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.039 [2024-10-28 05:11:51.517544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.039 [2024-10-28 05:11:51.517572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.039 [2024-10-28 05:11:51.517588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.039 [2024-10-28 05:11:51.517601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f10000b90 00:36:01.039 [2024-10-28 05:11:51.517632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.039 qpair failed and we were unable to recover it. 00:36:01.039 [2024-10-28 05:11:51.527513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.039 [2024-10-28 05:11:51.527653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.039 [2024-10-28 05:11:51.527687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.039 [2024-10-28 05:11:51.527702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.039 [2024-10-28 05:11:51.527715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:36:01.039 [2024-10-28 05:11:51.527749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.039 qpair failed and we were unable to recover it. 00:36:01.039 [2024-10-28 05:11:51.537422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.039 [2024-10-28 05:11:51.537537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.039 [2024-10-28 05:11:51.537564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.039 [2024-10-28 05:11:51.537579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.039 [2024-10-28 05:11:51.537593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ac3390 00:36:01.039 [2024-10-28 05:11:51.537624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.039 qpair failed and we were unable to recover it. 00:36:01.039 [2024-10-28 05:11:51.537736] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:01.039 A controller has encountered a failure and is being reset. 
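00:36:01.039 The repeated entries above show the same failure pattern each time: the target rejects the I/O-qpair CONNECT with "Unknown controller ID 0x1", the host-side Fabrics CONNECT poll fails (sct 1, sc 130), the TCP qpair cannot be established, and a missed Keep Alive finally forces the controller reset reported here. As an illustrative aside that is not part of the captured run, the same TCP endpoint could be probed by hand with stock nvme-cli; the address, port, and subsystem NQN below are copied from the log, while the use of the kernel nvme-tcp initiator (rather than the SPDK host driver this test exercises) is an assumption of the sketch.
    # Hypothetical manual probe of the target seen in the log above (not executed by this test).
    modprobe nvme-tcp                                                        # load the kernel NVMe/TCP initiator
    nvme discover -t tcp -a 10.0.0.2 -s 4420                                 # list subsystems offered at 10.0.0.2:4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1    # attempt the Fabrics CONNECT by hand
    nvme list-subsys                                                         # confirm whether the controller attached
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                            # clean up the manual connection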
00:36:01.039 [2024-10-28 05:11:51.547428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.039 [2024-10-28 05:11:51.547541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.039 [2024-10-28 05:11:51.547573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.039 [2024-10-28 05:11:51.547589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.039 [2024-10-28 05:11:51.547602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f08000b90 00:36:01.039 [2024-10-28 05:11:51.547655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.039 qpair failed and we were unable to recover it. 00:36:01.039 Controller properly reset. 00:36:01.039 Initializing NVMe Controllers 00:36:01.039 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:01.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:01.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:01.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:01.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:01.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:01.039 Initialization complete. Launching workers. 00:36:01.039 Starting thread on core 1 00:36:01.039 Starting thread on core 2 00:36:01.039 Starting thread on core 3 00:36:01.039 Starting thread on core 0 00:36:01.039 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:01.039 00:36:01.039 real 0m11.553s 00:36:01.039 user 0m21.335s 00:36:01.039 sys 0m5.245s 00:36:01.039 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:01.039 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.039 ************************************ 00:36:01.039 END TEST nvmf_target_disconnect_tc2 00:36:01.039 ************************************ 00:36:01.039 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:01.039 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:01.039 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:01.039 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:01.039 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:01.039 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:01.039 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:01.039 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:01.039 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:36:01.039 rmmod nvme_tcp 00:36:01.039 rmmod nvme_fabrics 00:36:01.298 rmmod nvme_keyring 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 2481487 ']' 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 2481487 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2481487 ']' 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2481487 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2481487 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2481487' 00:36:01.298 killing process with pid 2481487 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2481487 00:36:01.298 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2481487 00:36:01.557 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:01.557 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:01.557 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:01.557 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:01.557 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:36:01.557 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:01.557 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:36:01.557 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:01.557 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:01.557 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.557 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:01.557 05:11:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.465 05:11:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:03.465 00:36:03.465 real 0m16.655s 00:36:03.465 user 0m47.385s 00:36:03.465 sys 0m7.422s 00:36:03.465 05:11:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:03.465 05:11:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:03.465 ************************************ 00:36:03.465 END TEST nvmf_target_disconnect 00:36:03.465 ************************************ 00:36:03.465 05:11:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:03.465 00:36:03.465 real 7m5.036s 00:36:03.465 user 18m9.853s 00:36:03.465 sys 1m28.982s 00:36:03.465 05:11:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:03.465 05:11:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.465 ************************************ 00:36:03.465 END TEST nvmf_host 00:36:03.465 ************************************ 00:36:03.465 05:11:53 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:03.465 05:11:53 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:03.465 05:11:53 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:03.465 05:11:53 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:03.465 05:11:53 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:03.465 05:11:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:03.465 ************************************ 00:36:03.465 START TEST nvmf_target_core_interrupt_mode 00:36:03.465 ************************************ 00:36:03.465 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:03.725 * Looking for test storage... 
00:36:03.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1689 -- # lcov --version 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:03.725 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:36:03.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.726 --rc genhtml_branch_coverage=1 00:36:03.726 --rc genhtml_function_coverage=1 00:36:03.726 --rc genhtml_legend=1 00:36:03.726 --rc geninfo_all_blocks=1 00:36:03.726 --rc geninfo_unexecuted_blocks=1 00:36:03.726 00:36:03.726 ' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:36:03.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.726 --rc genhtml_branch_coverage=1 00:36:03.726 --rc genhtml_function_coverage=1 00:36:03.726 --rc genhtml_legend=1 00:36:03.726 --rc geninfo_all_blocks=1 00:36:03.726 --rc geninfo_unexecuted_blocks=1 00:36:03.726 00:36:03.726 ' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:36:03.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.726 --rc genhtml_branch_coverage=1 00:36:03.726 --rc genhtml_function_coverage=1 00:36:03.726 --rc genhtml_legend=1 00:36:03.726 --rc geninfo_all_blocks=1 00:36:03.726 --rc geninfo_unexecuted_blocks=1 00:36:03.726 00:36:03.726 ' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:36:03.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.726 --rc genhtml_branch_coverage=1 00:36:03.726 --rc genhtml_function_coverage=1 00:36:03.726 --rc genhtml_legend=1 00:36:03.726 --rc geninfo_all_blocks=1 00:36:03.726 --rc geninfo_unexecuted_blocks=1 00:36:03.726 00:36:03.726 ' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:03.726 ************************************ 00:36:03.726 START TEST nvmf_abort 00:36:03.726 ************************************ 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:03.726 * Looking for test storage... 00:36:03.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1689 -- # lcov --version 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:03.726 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:03.727 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:03.727 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:03.727 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:03.727 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:03.727 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:36:03.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.727 --rc genhtml_branch_coverage=1 00:36:03.727 --rc genhtml_function_coverage=1 00:36:03.727 --rc genhtml_legend=1 00:36:03.727 --rc geninfo_all_blocks=1 00:36:03.727 --rc geninfo_unexecuted_blocks=1 00:36:03.727 00:36:03.727 ' 00:36:03.727 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:36:03.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.727 --rc genhtml_branch_coverage=1 00:36:03.727 --rc genhtml_function_coverage=1 00:36:03.727 --rc genhtml_legend=1 00:36:03.727 --rc geninfo_all_blocks=1 00:36:03.727 --rc geninfo_unexecuted_blocks=1 00:36:03.727 00:36:03.727 ' 00:36:03.727 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:36:03.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.727 --rc genhtml_branch_coverage=1 00:36:03.727 --rc genhtml_function_coverage=1 00:36:03.727 --rc genhtml_legend=1 00:36:03.727 --rc geninfo_all_blocks=1 00:36:03.727 --rc geninfo_unexecuted_blocks=1 00:36:03.727 00:36:03.727 ' 00:36:03.727 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:36:03.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.727 --rc genhtml_branch_coverage=1 00:36:03.727 --rc genhtml_function_coverage=1 00:36:03.727 --rc genhtml_legend=1 00:36:03.727 --rc geninfo_all_blocks=1 00:36:03.727 --rc geninfo_unexecuted_blocks=1 00:36:03.727 00:36:03.727 ' 00:36:03.727 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:03.986 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:03.987 05:11:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:03.987 05:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:05.890 05:11:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:05.890 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:05.891 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
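[editor's note] The scan traced above builds whitelists of Intel E810/X722 and Mellanox PCI device IDs and keeps only adapters that actually expose a network interface. A minimal sketch of the same idea using plain sysfs, trimmed to the two E810 IDs reported in this run; the variable names and the sysfs walk are illustrative, not the harness's own pci_bus_cache mechanism:

  # enumerate NICs whose PCI vendor:device pair is on a whitelist (sketch)
  intel=0x8086
  wanted="0x1592 0x159b"                        # E810 IDs seen in the trace above
  for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
    [[ $vendor == "$intel" ]] || continue
    [[ " $wanted " == *" $device "* ]] || continue
    for net in "$pci"/net/*; do                 # only count devices bound to a netdev
      [[ -e $net ]] && echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
    done
  done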
00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:05.891 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:05.891 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:05.891 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:05.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:05.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:36:05.891 00:36:05.891 --- 10.0.0.2 ping statistics --- 00:36:05.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.891 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:05.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:05.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:36:05.891 00:36:05.891 --- 10.0.0.1 ping statistics --- 00:36:05.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.891 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=2484261 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2484261 00:36:05.891 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2484261 ']' 00:36:05.892 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:05.892 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:05.892 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:05.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:05.892 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:05.892 05:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.892 [2024-10-28 05:11:56.408195] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:05.892 [2024-10-28 05:11:56.409289] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:36:05.892 [2024-10-28 05:11:56.409340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:06.150 [2024-10-28 05:11:56.547091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:06.150 [2024-10-28 05:11:56.584234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:06.150 [2024-10-28 05:11:56.629542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:06.150 [2024-10-28 05:11:56.629598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:06.150 [2024-10-28 05:11:56.629621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:06.150 [2024-10-28 05:11:56.629631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:06.150 [2024-10-28 05:11:56.629650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:06.150 [2024-10-28 05:11:56.631048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:06.150 [2024-10-28 05:11:56.631110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:06.150 [2024-10-28 05:11:56.631114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.150 [2024-10-28 05:11:56.720884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:06.150 [2024-10-28 05:11:56.721071] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
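[editor's note] The target above is launched inside the cvl_0_0_ns_spdk namespace with interrupt mode enabled, and the harness then waits for its RPC socket before issuing any rpc_cmd. A rough equivalent of that start-and-wait step, assuming the default /var/tmp/spdk.sock socket and SPDK-repo-relative paths; the polling loop is a simplification of the harness's waitforlisten helper, not a copy of it:

  # start the target in the namespace and wait for its RPC socket (sketch)
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break        # socket appears once the app is up
      sleep 0.1
  done
  ./scripts/rpc.py rpc_get_methods > /dev/null  # confirm the target answers RPCs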
00:36:06.150 [2024-10-28 05:11:56.721079] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:06.150 [2024-10-28 05:11:56.721365] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.085 [2024-10-28 05:11:57.435837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.085 Malloc0 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.085 Delay0 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 Delay0 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.085 [2024-10-28 05:11:57.508038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.085 05:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:07.344 [2024-10-28 05:11:57.717738] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:09.249 Initializing NVMe Controllers 00:36:09.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:09.249 controller IO queue size 128 less than required 00:36:09.249 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:09.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:09.249 Initialization complete. Launching workers. 
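[editor's note] The rpc_cmd calls above amount to a short provisioning sequence before the abort run whose output continues below: create the TCP transport, back it with a Malloc bdev wrapped in a Delay bdev, expose that through a subsystem and listener, then drive it with the abort example. Collected here as explicit rpc.py invocations with the same arguments the trace shows; rpc_cmd is just the harness's wrapper around rpc.py, so this is a sketch, not a new procedure:

  # same provisioning steps, expressed as explicit rpc.py calls (sketch)
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0             # 64 MiB bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000       # latencies in microseconds (1 s)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
       -c 0x1 -t 1 -l warning -q 128                     # queue I/O and aborts against Delay0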
00:36:09.249 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 22960 00:36:09.249 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 23017, failed to submit 66 00:36:09.249 success 22960, unsuccessful 57, failed 0 00:36:09.249 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:09.249 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.249 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.249 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.249 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:09.249 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:09.249 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:09.249 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:09.249 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:09.249 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:09.249 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:09.249 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:09.249 rmmod nvme_tcp 00:36:09.249 rmmod nvme_fabrics 00:36:09.249 rmmod nvme_keyring 00:36:09.507 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:09.507 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:09.507 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:09.507 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2484261 ']' 00:36:09.507 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2484261 00:36:09.508 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2484261 ']' 00:36:09.508 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2484261 00:36:09.508 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:36:09.508 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:09.508 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2484261 00:36:09.508 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:09.508 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:09.508 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2484261' 00:36:09.508 killing process with pid 2484261 
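[editor's note] Teardown, which starts above and continues below, mirrors the setup: the subsystem is deleted over RPC, the kernel initiator modules are unloaded, and the target process is killed by pid. A condensed sketch of that shutdown path; $nvmfpid refers back to the pid captured when the target was started (2484261 in this run), and the wait is only meaningful if the target was backgrounded from the same shell:

  # tear down the abort test (sketch)
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  sync
  modprobe -v -r nvme-tcp                  # rmmod output above shows fabrics/keyring go too
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"
  wait "$nvmfpid" 2>/dev/null || true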
00:36:09.508 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2484261 00:36:09.508 05:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2484261 00:36:09.766 05:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:09.766 05:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:09.766 05:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:09.766 05:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:09.766 05:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:36:09.766 05:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:09.766 05:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:36:09.766 05:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:09.766 05:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:09.766 05:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.766 05:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:09.766 05:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.668 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:11.668 00:36:11.668 real 0m7.984s 00:36:11.668 user 0m9.634s 00:36:11.668 sys 0m2.861s 00:36:11.668 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:11.668 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.668 ************************************ 00:36:11.668 END TEST nvmf_abort 00:36:11.668 ************************************ 00:36:11.668 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:11.668 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:11.668 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:11.668 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:11.668 ************************************ 00:36:11.668 START TEST nvmf_ns_hotplug_stress 00:36:11.668 ************************************ 00:36:11.668 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:11.926 * Looking for test storage... 
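[editor's note] Firewall cleanup in nvmf_tcp_fini relies on the comment tag added when the ACCEPT rule was installed: everything marked SPDK_NVMF is filtered out of an iptables-save dump and the remainder is restored, so only this test's rules disappear. A sketch of that tag-based removal plus the address and namespace cleanup; the explicit "ip netns delete" is an assumption about what the harness's _remove_spdk_ns helper boils down to, since the trace only shows the helper being called:

  # drop only the rules this test added, then remove the test namespace (sketch)
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns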
00:36:11.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lcov --version 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.926 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:36:11.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.926 --rc genhtml_branch_coverage=1 00:36:11.926 --rc genhtml_function_coverage=1 00:36:11.927 --rc genhtml_legend=1 00:36:11.927 --rc geninfo_all_blocks=1 00:36:11.927 --rc geninfo_unexecuted_blocks=1 00:36:11.927 00:36:11.927 ' 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:36:11.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.927 --rc genhtml_branch_coverage=1 00:36:11.927 --rc genhtml_function_coverage=1 00:36:11.927 --rc genhtml_legend=1 00:36:11.927 --rc geninfo_all_blocks=1 00:36:11.927 --rc geninfo_unexecuted_blocks=1 00:36:11.927 00:36:11.927 ' 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:36:11.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.927 --rc genhtml_branch_coverage=1 00:36:11.927 --rc genhtml_function_coverage=1 00:36:11.927 --rc genhtml_legend=1 00:36:11.927 --rc geninfo_all_blocks=1 00:36:11.927 --rc geninfo_unexecuted_blocks=1 00:36:11.927 00:36:11.927 ' 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:36:11.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.927 --rc genhtml_branch_coverage=1 00:36:11.927 --rc genhtml_function_coverage=1 
00:36:11.927 --rc genhtml_legend=1 00:36:11.927 --rc geninfo_all_blocks=1 00:36:11.927 --rc geninfo_unexecuted_blocks=1 00:36:11.927 00:36:11.927 ' 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
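[editor's note] As in the earlier abort test, common.sh derives the host identity from nvme-cli: the host NQN comes from nvme gen-hostnqn and the host ID is its trailing UUID, and both are packed into NVME_HOST for later connect calls. A small sketch of that derivation; the parameter expansion used to peel off the UUID is an assumption, since the trace only shows the resulting values:

  # derive host NQN / host ID the way the values above suggest (sketch)
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the UUID suffix (assumed extraction)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")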
00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:11.927 05:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:13.827 05:12:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:13.827 05:12:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:13.827 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:13.828 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:13.828 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:13.828 
05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:13.828 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:13.828 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.828 05:12:04 
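The loop traced above resolves each selected PCI function to its kernel net device by globbing /sys and keeps only interfaces that are up. A minimal sketch of that lookup, using the two PCI addresses and the cvl_0_0/cvl_0_1 names from this run (reading operstate is an approximation of the script's "up" check, not its exact code):

    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            dev=${path##*/}
            # keep only interfaces whose link is up, roughly what the [[ up == up ]] check above does
            [[ $(cat "$path/operstate" 2>/dev/null) == up ]] || continue
            echo "Found net devices under $pci: $dev"
            net_devs+=("$dev")
        done
    done
    NVMF_TARGET_INTERFACE=${net_devs[0]}      # cvl_0_0 in this run
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}   # cvl_0_1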
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.828 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:14.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:14.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:36:14.087 00:36:14.087 --- 10.0.0.2 ping statistics --- 00:36:14.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.087 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:14.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:14.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:36:14.087 00:36:14.087 --- 10.0.0.1 ping statistics --- 00:36:14.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.087 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2486711 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2486711 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2486711 ']' 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:14.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
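The nvmf_tcp_init steps traced above turn the two E810 ports into a point-to-point test link: cvl_0_0 (the target side) is moved into the cvl_0_0_ns_spdk network namespace while cvl_0_1 stays in the root namespace as the initiator. Condensed, with names and addresses exactly as in this run (run as root; the iptables comment tag from the log is omitted):

    ip -4 addr flush cvl_0_0                                   # clear leftovers from a previous run
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port now lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP 4420 on the initiator-side interface
    ping -c 1 10.0.0.2                                         # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # namespace -> initiator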
00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:14.087 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:14.087 [2024-10-28 05:12:04.583085] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:14.087 [2024-10-28 05:12:04.584237] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:36:14.087 [2024-10-28 05:12:04.584308] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:14.346 [2024-10-28 05:12:04.723486] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:14.346 [2024-10-28 05:12:04.765859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:14.346 [2024-10-28 05:12:04.814875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:14.346 [2024-10-28 05:12:04.814938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:14.346 [2024-10-28 05:12:04.814964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:14.346 [2024-10-28 05:12:04.814977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:14.346 [2024-10-28 05:12:04.814988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:14.346 [2024-10-28 05:12:04.816515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:14.346 [2024-10-28 05:12:04.817657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:14.346 [2024-10-28 05:12:04.817672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.346 [2024-10-28 05:12:04.906943] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:14.346 [2024-10-28 05:12:04.907133] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:14.346 [2024-10-28 05:12:04.907138] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:14.346 [2024-10-28 05:12:04.907415] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
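With the namespace in place, nvmfappstart launches the target inside it; the -m 0xE mask accounts for the three reactors on cores 1-3, and --interrupt-mode accounts for the intr-mode notices above. A minimal launch-and-wait sketch, with paths shortened to the SPDK tree root and a simple polling loop standing in for the script's own waitforlisten helper:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # wait until the target answers on its default RPC socket before sending any configuration
    until ./scripts/rpc.py -t 1 rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done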
00:36:14.346 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:14.346 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:36:14.346 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:14.346 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:14.346 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:14.604 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:14.604 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:14.604 05:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:14.862 [2024-10-28 05:12:05.202377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:14.862 05:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:15.120 05:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:15.379 [2024-10-28 05:12:05.750850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.379 05:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:15.638 05:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:15.897 Malloc0 00:36:15.897 05:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:16.155 Delay0 00:36:16.155 05:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:16.413 05:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:16.671 NULL1 00:36:16.671 05:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
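Strung together, the RPC calls traced above are the whole subsystem bring-up for this test. The same sequence, condensed (rpc.py abbreviates the full /var/jenkins/.../spdk/scripts/rpc.py path; comments summarize the positional arguments):

    rpc.py nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, 8192-byte IO unit
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # any host, max 10 namespaces
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0                           # 32 MiB malloc bdev, 512 B blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delay bdev on Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0        # becomes namespace 1
    rpc.py bdev_null_create NULL1 1000 512                                # 1000 MiB null bdev, 512 B blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1         # becomes namespace 2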
00:36:16.929 05:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2487004 00:36:16.929 05:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:16.929 05:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:16.929 05:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:18.304 Read completed with error (sct=0, sc=11) 00:36:18.304 05:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.562 05:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:18.562 05:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:18.820 true 00:36:18.820 05:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:18.820 05:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:19.753 05:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.753 05:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:19.753 05:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:20.011 true 00:36:20.011 05:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:20.011 05:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:36:20.269 05:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:20.836 05:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:20.836 05:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:20.836 true 00:36:20.836 05:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:20.836 05:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.094 05:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.660 05:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:21.660 05:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:21.660 true 00:36:21.660 05:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:21.660 05:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:22.597 05:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:22.855 05:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:22.855 05:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:23.112 true 00:36:23.112 05:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:23.112 05:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.369 05:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.936 05:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:23.936 05:12:14 
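The repeating pattern above and below (kill -0, remove namespace 1, re-add Delay0, bump null_size, bdev_null_resize) is the hot-plug stress loop itself, driven while spdk_nvme_perf runs in the background. Reconstructed from the ns_hotplug_stress.sh line numbers in the trace, it is roughly:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &     # 30 s random-read run at QD 128; -Q appears to let it
    PERF_PID=$!                                       # ride through the read errors suppressed in the log
    null_size=1000
    while kill -0 $PERF_PID; do                       # keep hot-plugging for as long as perf is alive
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 $null_size      # resize the null bdev behind namespace 2 each pass
    done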
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:23.936 true 00:36:23.936 05:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:23.936 05:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:24.502 05:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.502 05:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:24.502 05:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:24.760 true 00:36:24.760 05:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:24.760 05:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.693 05:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:25.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.952 05:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:25.952 05:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:26.518 true 00:36:26.518 05:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:26.518 05:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:26.519 05:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:26.778 05:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:26.778 05:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:27.036 true 00:36:27.294 05:12:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:27.294 05:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:27.860 05:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.119 05:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:28.119 05:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:28.376 true 00:36:28.634 05:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:28.634 05:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.892 05:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.149 05:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:29.149 05:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:29.405 true 00:36:29.405 05:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:29.405 05:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.335 05:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.335 05:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:30.335 05:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:30.639 true 00:36:30.639 05:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:30.639 05:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.968 05:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.225 05:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:31.225 05:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:31.482 true 00:36:31.482 05:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:31.482 05:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.740 05:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.998 05:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:31.998 05:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:32.256 true 00:36:32.256 05:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:32.256 05:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.630 05:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.630 05:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:33.630 05:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:33.888 true 00:36:33.888 05:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:33.888 05:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.146 05:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.403 05:12:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:34.403 05:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:34.661 true 00:36:34.661 05:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:34.661 05:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.227 05:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.227 05:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:35.227 05:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:35.792 true 00:36:35.792 05:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:35.792 05:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:36.725 05:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.726 05:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:36.726 05:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:37.290 true 00:36:37.290 05:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:37.291 05:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.548 05:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.806 05:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:37.806 05:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:38.064 true 00:36:38.064 05:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:38.064 05:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.323 05:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.581 05:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:38.581 05:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:38.839 true 00:36:38.839 05:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:38.839 05:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:39.772 05:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:40.031 05:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:40.031 05:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:40.289 true 00:36:40.289 05:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:40.290 05:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.548 05:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.805 05:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:40.805 05:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:41.063 true 00:36:41.063 05:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:41.063 05:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.322 05:12:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.579 05:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:41.579 05:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:41.838 true 00:36:41.838 05:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:41.838 05:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:42.771 05:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.028 05:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:43.028 05:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:43.286 true 00:36:43.544 05:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:43.544 05:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.802 05:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.060 05:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:44.060 05:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:44.318 true 00:36:44.318 05:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:44.318 05:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.576 05:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.833 05:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:44.833 05:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:45.091 true 00:36:45.091 05:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:45.091 05:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:46.024 05:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:46.281 05:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:46.281 05:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:46.546 true 00:36:46.546 05:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:46.546 05:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.810 05:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.067 05:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:47.067 05:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:47.325 Initializing NVMe Controllers 00:36:47.325 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:47.325 Controller IO queue size 128, less than required. 00:36:47.325 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:47.325 Controller IO queue size 128, less than required. 00:36:47.325 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:47.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:47.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:47.325 Initialization complete. Launching workers. 
00:36:47.325 ======================================================== 00:36:47.325 Latency(us) 00:36:47.325 Device Information : IOPS MiB/s Average min max 00:36:47.325 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 806.97 0.39 66175.94 2002.53 1029968.14 00:36:47.325 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8813.94 4.30 14523.50 1711.91 365298.60 00:36:47.325 ======================================================== 00:36:47.325 Total : 9620.90 4.70 18855.91 1711.91 1029968.14 00:36:47.325 00:36:47.325 true 00:36:47.325 05:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2487004 00:36:47.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2487004) - No such process 00:36:47.325 05:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2487004 00:36:47.325 05:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.583 05:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:47.841 05:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:47.841 05:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:47.841 05:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:47.841 05:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:47.841 05:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:48.099 null0 00:36:48.099 05:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:48.099 05:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:48.099 05:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:48.358 null1 00:36:48.358 05:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:48.358 05:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:48.358 05:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:48.616 null2 00:36:48.616 05:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:48.616 05:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:48.616 05:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:48.874 null3 00:36:49.133 05:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.133 05:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.133 05:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:49.389 null4 00:36:49.389 05:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.389 05:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.389 05:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:49.647 null5 00:36:49.647 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.647 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.647 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:49.905 null6 00:36:49.905 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.905 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.905 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:50.164 null7 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:50.164 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2491406 2491407 2491409 2491411 2491414 2491416 2491418 2491420 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.165 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:50.424 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:50.424 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:50.424 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:50.424 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:50.424 05:12:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:50.424 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:50.424 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:50.424 05:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.683 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:50.941 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:50.941 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:50.941 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:50.942 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:50.942 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:50.942 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:50.942 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.942 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
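The sh@62-@66 lines interleaved above show how those loops are driven: one add_remove call per namespace (nsid 1-8 on bdevs null0-null7) is started in the background, its PID is appended to pids, and the script waits on all of them, which is why add_ns and remove_ns calls for different namespaces interleave freely in the log. A hedged sketch of that driver, again inferred only from the trace (nthreads=8 matches the eight workers and eight PIDs in the wait line):

# Sketch of the launcher seen at sh@62-@66: fan out workers, then wait.
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
	add_remove $((i + 1)) "null$i" &   # sh@63: background worker per namespace
	pids+=($!)                          # sh@64: remember its PID
done
wait "${pids[@]}"                       # sh@66: block until all workers finish

Running the workers concurrently is the point of the test: the overlapping attach/detach RPCs are the namespace-hotplug churn the stress case is designed to create.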
00:36:51.508 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.508 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.508 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:51.508 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.508 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.508 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:51.508 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.508 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.509 05:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:51.768 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:51.768 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:51.768 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:51.768 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:51.768 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.768 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:51.768 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:51.768 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.026 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:52.285 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:52.285 05:12:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:52.285 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:52.285 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:52.285 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:52.285 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:52.285 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.285 05:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.543 05:12:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.543 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.544 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:52.544 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.544 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.544 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:52.544 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.544 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.544 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:52.802 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:52.802 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:52.802 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:52.802 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:52.802 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:52.802 
05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:52.802 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.802 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:53.060 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.061 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:53.628 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:53.628 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:53.628 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:53.628 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:53.628 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:53.628 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.628 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:53.628 05:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:53.628 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.628 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:36:53.628 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:53.628 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.628 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.628 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.887 
05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.887 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:54.146 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:54.146 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:54.146 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:54.146 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.146 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:54.146 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:54.146 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:54.146 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.405 05:12:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.405 05:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:54.664 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:54.664 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:54.664 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:54.664 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:54.664 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:54.664 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.664 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:54.664 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:54.923 05:12:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.923 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:55.182 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.182 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:55.182 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:55.182 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:55.182 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:55.182 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:55.182 
05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:55.182 05:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:55.440 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.440 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.440 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.699 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:55.958 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:55.958 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.958 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:55.958 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:55.958 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:55.958 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:55.958 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:55.958 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:56.216 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.216 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.216 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.216 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.216 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.216 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.216 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.216 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:56.217 rmmod nvme_tcp 00:36:56.217 rmmod nvme_fabrics 00:36:56.217 rmmod nvme_keyring 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2486711 ']' 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2486711 00:36:56.217 05:12:46 
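The trace above is the heart of the hotplug stress: ns_hotplug_stress.sh keeps attaching namespaces 1 through 8 (each backed by a null bdev) to nqn.2016-06.io.spdk:cnode1 and then detaching them again while the initiator holds I/O outstanding. The loop bounds and the ordering logic are not fully visible in the trace, so the following is only a hedged sketch of the observable add/remove pattern, reusing the rpc.py invocations shown above; the shuffled ordering and the pass count are assumptions.

  # Hedged sketch of the add/remove cycle traced above; not the literal ns_hotplug_stress.sh.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  for ((pass = 0; pass < 10; pass++)); do          # pass count is illustrative
      for nsid in $(seq 1 8 | shuf); do            # attach namespaces 1-8 in shuffled order
          $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))"
      done
      for nsid in $(seq 1 8 | shuf); do            # detach them again while I/O is in flight
          $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
      done
  done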
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2486711 ']' 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2486711 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2486711 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2486711' 00:36:56.217 killing process with pid 2486711 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2486711 00:36:56.217 05:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2486711 00:36:56.475 05:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:56.475 05:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:56.475 05:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:56.476 05:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:36:56.476 05:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:36:56.476 05:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:56.476 05:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:36:56.476 05:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:56.476 05:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:56.476 05:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:56.476 05:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:56.476 05:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:59.081 00:36:59.081 real 0m46.843s 00:36:59.081 user 3m16.378s 00:36:59.081 sys 0m22.170s 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:59.081 05:12:49 
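Teardown (nvmftestfini) as traced above reduces to: unload the initiator-side NVMe-oF kernel modules, kill the target process, strip only the firewall rules the harness tagged, and dismantle the test namespace. A hedged sketch of those steps, with the pid and interface names taken from the log; the netns removal is an assumption about what the xtrace-suppressed _remove_spdk_ns helper does.

  # Sketch of the cleanup sequence traced above; values come from the log, not a drop-in script.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  nvmfpid=2486711                          # target pid reported above
  kill "$nvmfpid"
  wait "$nvmfpid" 2>/dev/null || true      # in the harness the target is a child of the test shell

  # Remove only the rules tagged SPDK_NVMF when they were installed, then drop the namespace.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1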
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:59.081 ************************************ 00:36:59.081 END TEST nvmf_ns_hotplug_stress 00:36:59.081 ************************************ 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:59.081 ************************************ 00:36:59.081 START TEST nvmf_delete_subsystem 00:36:59.081 ************************************ 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:59.081 * Looking for test storage... 00:36:59.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lcov --version 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:36:59.081 05:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:36:59.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.081 --rc genhtml_branch_coverage=1 00:36:59.081 --rc genhtml_function_coverage=1 00:36:59.081 --rc genhtml_legend=1 00:36:59.081 --rc geninfo_all_blocks=1 00:36:59.081 --rc geninfo_unexecuted_blocks=1 00:36:59.081 00:36:59.081 ' 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:36:59.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.081 --rc genhtml_branch_coverage=1 00:36:59.081 --rc genhtml_function_coverage=1 00:36:59.081 --rc genhtml_legend=1 00:36:59.081 --rc geninfo_all_blocks=1 00:36:59.081 --rc geninfo_unexecuted_blocks=1 00:36:59.081 00:36:59.081 ' 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:36:59.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.081 --rc genhtml_branch_coverage=1 00:36:59.081 --rc genhtml_function_coverage=1 00:36:59.081 --rc genhtml_legend=1 00:36:59.081 --rc geninfo_all_blocks=1 00:36:59.081 --rc 
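The delete_subsystem test begins with the same lcov probe every autotest runs: common.sh's lt/cmp_versions helpers split the reported lcov version and the threshold on '.', '-' and ':' and compare them field by field to decide which coverage flags to use. A compact, standalone illustration of that comparison (not the harness's exact helper; the real one also validates that every field is numeric):

  # Minimal field-wise version comparison in the spirit of cmp_versions traced above.
  version_lt() {
      local IFS=.-:
      local -a v1 v2
      local i
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # versions are equal
  }

  version_lt 1.15 2 && echo 'lcov 1.15 predates 2'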
geninfo_unexecuted_blocks=1 00:36:59.081 00:36:59.081 ' 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:36:59.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.081 --rc genhtml_branch_coverage=1 00:36:59.081 --rc genhtml_function_coverage=1 00:36:59.081 --rc genhtml_legend=1 00:36:59.081 --rc geninfo_all_blocks=1 00:36:59.081 --rc geninfo_unexecuted_blocks=1 00:36:59.081 00:36:59.081 ' 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:59.081 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:59.082 05:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:36:59.082 05:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.985 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:00.985 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:00.985 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:00.985 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:00.985 05:12:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:00.986 05:12:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:00.986 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:00.986 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:00.986 05:12:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:00.986 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:00.986 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
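Interface discovery above comes down to: keep only the PCI IDs on the supported e810/x722/mlx lists, then ask sysfs which kernel net device sits behind each PCI function and whether its link is up. A hedged sketch of that sysfs lookup for the two ports found in this run (the harness's exact link-state test is not shown in the trace; reading operstate approximates it):

  # Sketch: map the PCI functions reported above to their net devices via sysfs.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $netdir ]] || continue
          dev=${netdir##*/}
          state=$(cat "$netdir/operstate" 2>/dev/null)
          echo "Found net device under $pci: $dev ($state)"
      done
  done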
NVMF_TARGET_INTERFACE=cvl_0_0 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:00.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:00.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:37:00.986 00:37:00.986 --- 10.0.0.2 ping statistics --- 00:37:00.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:00.986 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:37:00.986 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:00.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:00.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:37:00.987 00:37:00.987 --- 10.0.0.1 ping statistics --- 00:37:00.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:00.987 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2494142 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2494142 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2494142 ']' 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:00.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
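The nvmf_tcp_init sequence above builds the test topology out of the two physical ports: the first port (cvl_0_0) is moved into its own network namespace to act as the target at 10.0.0.2, the host keeps the second port (cvl_0_1) as the initiator at 10.0.0.1, the NVMe/TCP port is opened in the firewall, and reachability is checked in both directions. Condensed from the commands traced above:

  # Two-port target/initiator topology, as configured in the trace.
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # first port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps the second port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Accept NVMe/TCP traffic on the initiator port and verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1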
00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:00.987 05:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.987 [2024-10-28 05:12:51.425567] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:00.987 [2024-10-28 05:12:51.426629] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:37:00.987 [2024-10-28 05:12:51.426710] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:00.987 [2024-10-28 05:12:51.564011] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:01.246 [2024-10-28 05:12:51.601060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:01.246 [2024-10-28 05:12:51.648153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:01.246 [2024-10-28 05:12:51.648203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:01.246 [2024-10-28 05:12:51.648229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:01.246 [2024-10-28 05:12:51.648240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:01.246 [2024-10-28 05:12:51.648251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:01.246 [2024-10-28 05:12:51.649672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.246 [2024-10-28 05:12:51.649679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:01.246 [2024-10-28 05:12:51.743716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:01.246 [2024-10-28 05:12:51.744064] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:01.246 [2024-10-28 05:12:51.744257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
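With connectivity verified, nvmfappstart launches the target inside the namespace in interrupt mode, pinned to two cores, and then waits for its RPC socket before configuring it; the notices above confirm interrupt mode on both reactors. A simplified sketch of that launch (command line copied from the trace; the socket-polling loop is only a stand-in for the harness's waitforlisten helper, which polls the application over /var/tmp/spdk.sock):

  # Start nvmf_tgt in the target namespace: interrupt mode, core mask 0x3.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!

  # Simplified stand-in for waitforlisten: block until the RPC socket shows up.
  while [ ! -S /var/tmp/spdk.sock ]; do
      sleep 0.5
  done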
00:37:02.181 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.182 [2024-10-28 05:12:52.470387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.182 [2024-10-28 05:12:52.490588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.182 NULL1 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.182 05:12:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.182 Delay0 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2494290 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:02.182 05:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:02.182 [2024-10-28 05:12:52.672672] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
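The scenario that follows is deliberately set up so that I/O is still outstanding when the subsystem disappears: a null bdev is wrapped in a delay bdev with roughly one-second latencies (the bdev_delay_create arguments are in microseconds), exposed as the only namespace of cnode1, loaded from the initiator with spdk_nvme_perf, and then the whole subsystem is deleted mid-run. A condensed sketch of that sequence, with every RPC and the perf command line copied from the trace; rpc.py here stands in for the harness's rpc_cmd wrapper, which talks to the target over /var/tmp/spdk.sock:

  # Delete-under-I/O sequence as traced above (rpc.py in place of rpc_cmd).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns "$nqn" Delay0

  # Initiator-side load for 5 seconds; delete the subsystem 2 seconds in.
  $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  $rpc nvmf_delete_subsystem "$nqn"
  wait "$perf_pid" || true    # perf is expected to report failed I/O once the subsystem is gone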
00:37:04.077 05:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:04.077 05:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.077 05:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 starting I/O failed: -6 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 starting I/O failed: -6 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 starting I/O failed: -6 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 starting I/O failed: -6 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 starting I/O failed: -6 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 starting I/O failed: -6 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 starting I/O failed: -6 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 starting I/O failed: -6 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 starting I/O failed: -6 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 starting I/O failed: -6 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 [2024-10-28 05:12:54.708258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3eb0000c00 is same with the state(6) to be set 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 
00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Read completed with error (sct=0, sc=8) 00:37:04.335 Write completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 [2024-10-28 05:12:54.709388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3eb000cfe0 is same with the state(6) to be set 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error 
(sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 [2024-10-28 05:12:54.709883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3eb000d640 is same with the state(6) to be set 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 Write completed with 
error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 Read completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 Write completed with error (sct=0, sc=8) 00:37:04.336 starting I/O failed: -6 00:37:04.336 starting I/O failed: -6 00:37:05.271 [2024-10-28 05:12:55.688331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8bda0 is same with the state(6) to be set 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 [2024-10-28 05:12:55.709182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3eb000d310 is same with the state(6) to be set 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error 
(sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 [2024-10-28 05:12:55.709683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8fb20 is same with the state(6) to be set 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 [2024-10-28 05:12:55.709931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8dad0 is same with the state(6) to be set 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 
00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Write completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 Read completed with error (sct=0, sc=8) 00:37:05.271 [2024-10-28 05:12:55.710156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8d8f0 is same with the state(6) to be set 00:37:05.271 Initializing NVMe Controllers 00:37:05.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:05.271 Controller IO queue size 128, less than required. 00:37:05.271 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:05.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:05.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:05.271 Initialization complete. Launching workers. 
00:37:05.271 ======================================================== 00:37:05.271 Latency(us) 00:37:05.271 Device Information : IOPS MiB/s Average min max 00:37:05.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 193.91 0.09 948979.01 738.61 1013074.09 00:37:05.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.75 0.07 886288.52 953.12 1013743.48 00:37:05.271 ======================================================== 00:37:05.271 Total : 346.66 0.17 921355.73 738.61 1013743.48 00:37:05.271 00:37:05.271 05:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.271 05:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:05.271 [2024-10-28 05:12:55.710976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8bda0 (9): Bad file descriptor 00:37:05.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:05.271 05:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2494290 00:37:05.271 05:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2494290 00:37:05.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2494290) - No such process 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2494290 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2494290 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2494290 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:05.838 [2024-10-28 05:12:56.230540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2494732 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2494732 00:37:05.838 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:05.838 [2024-10-28 05:12:56.400007] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
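The "No such process" result from kill -0 above confirms the first perf process had already exited once its subsystem disappeared (spdk_nvme_perf reported "errors occurred", which is exactly what the NOT wait check expects). The test then rebuilds the subsystem, re-attaches the TCP listener and the Delay0 namespace, and starts a second, shorter perf pass that is allowed to run to completion; the lines that follow are just a poll loop checking for the perf PID every half second with a bounded iteration count. A rough stand-alone equivalent of this second phase, again assuming scripts/rpc.py in place of the suite's rpc_cmd wrapper (the loop in delete_subsystem.sh is structured a little differently, but the effect is the same):

# Recreate the subsystem that was just deleted and wire it back up
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Second perf pass: same workload, but only 3 seconds this time
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
delay=0
# Poll until perf exits on its own, giving up after roughly ten seconds
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break
    sleep 0.5
done
wait "$perf_pid"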
00:37:06.403 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:06.403 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2494732 00:37:06.403 05:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:06.661 05:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:06.661 05:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2494732 00:37:06.661 05:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:07.225 05:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:07.225 05:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2494732 00:37:07.225 05:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:07.790 05:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:07.790 05:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2494732 00:37:07.790 05:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:08.355 05:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:08.355 05:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2494732 00:37:08.355 05:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:08.921 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:08.921 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2494732 00:37:08.921 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:09.190 Initializing NVMe Controllers 00:37:09.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:09.190 Controller IO queue size 128, less than required. 00:37:09.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:09.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:09.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:09.190 Initialization complete. Launching workers. 
00:37:09.190 ======================================================== 00:37:09.190 Latency(us) 00:37:09.190 Device Information : IOPS MiB/s Average min max 00:37:09.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003450.28 1000075.54 1010993.85 00:37:09.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005279.08 1000133.88 1041300.06 00:37:09.190 ======================================================== 00:37:09.190 Total : 256.00 0.12 1004364.68 1000075.54 1041300.06 00:37:09.190 00:37:09.190 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:09.190 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2494732 00:37:09.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2494732) - No such process 00:37:09.190 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2494732 00:37:09.190 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:09.190 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:09.190 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:09.190 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:09.190 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:09.190 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:09.190 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:09.190 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:09.190 rmmod nvme_tcp 00:37:09.449 rmmod nvme_fabrics 00:37:09.449 rmmod nvme_keyring 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2494142 ']' 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2494142 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2494142 ']' 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2494142 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2494142 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2494142' 00:37:09.449 killing process with pid 2494142 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2494142 00:37:09.449 05:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2494142 00:37:09.449 05:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:09.707 05:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:09.707 05:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:09.707 05:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:09.707 05:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:37:09.707 05:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:09.707 05:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:37:09.707 05:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:09.707 05:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:09.707 05:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:09.707 05:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:09.707 05:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.613 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:11.613 00:37:11.613 real 0m12.983s 00:37:11.613 user 0m24.902s 00:37:11.613 sys 0m3.541s 00:37:11.613 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:11.613 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.613 ************************************ 00:37:11.613 END TEST nvmf_delete_subsystem 00:37:11.613 ************************************ 00:37:11.613 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:11.613 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:11.613 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:37:11.613 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:11.613 ************************************ 00:37:11.613 START TEST nvmf_host_management 00:37:11.613 ************************************ 00:37:11.613 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:11.613 * Looking for test storage... 00:37:11.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:11.613 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:37:11.613 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1689 -- # lcov --version 00:37:11.613 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:11.873 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:37:11.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.874 --rc genhtml_branch_coverage=1 00:37:11.874 --rc genhtml_function_coverage=1 00:37:11.874 --rc genhtml_legend=1 00:37:11.874 --rc geninfo_all_blocks=1 00:37:11.874 --rc geninfo_unexecuted_blocks=1 00:37:11.874 00:37:11.874 ' 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:37:11.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.874 --rc genhtml_branch_coverage=1 00:37:11.874 --rc genhtml_function_coverage=1 00:37:11.874 --rc genhtml_legend=1 00:37:11.874 --rc geninfo_all_blocks=1 00:37:11.874 --rc geninfo_unexecuted_blocks=1 00:37:11.874 00:37:11.874 ' 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:37:11.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.874 --rc genhtml_branch_coverage=1 00:37:11.874 --rc genhtml_function_coverage=1 00:37:11.874 --rc genhtml_legend=1 00:37:11.874 --rc geninfo_all_blocks=1 00:37:11.874 --rc geninfo_unexecuted_blocks=1 00:37:11.874 00:37:11.874 ' 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:37:11.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.874 --rc genhtml_branch_coverage=1 00:37:11.874 --rc genhtml_function_coverage=1 00:37:11.874 --rc genhtml_legend=1 
00:37:11.874 --rc geninfo_all_blocks=1 00:37:11.874 --rc geninfo_unexecuted_blocks=1 00:37:11.874 00:37:11.874 ' 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:11.874 05:13:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:11.874 05:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:13.778 05:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:13.778 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:13.778 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:13.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:13.778 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:13.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:13.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:13.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:37:13.779 00:37:13.779 --- 10.0.0.2 ping statistics --- 00:37:13.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:13.779 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:13.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:13.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:37:13.779 00:37:13.779 --- 10.0.0.1 ping statistics --- 00:37:13.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:13.779 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:13.779 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2497108 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2497108 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2497108 ']' 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
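The nvmf_tcp_init sequence traced above builds the test topology: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and reachability is verified with a ping in each direction. Condensed into a plain shell sketch, with the interface names and addresses used in this run:

    # Condensed from nvmf/common.sh@267-@291 as traced above.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                        # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1    # sanity check both directions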
00:37:14.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:14.038 05:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:14.038 [2024-10-28 05:13:04.432158] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:14.038 [2024-10-28 05:13:04.433284] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:37:14.038 [2024-10-28 05:13:04.433354] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.038 [2024-10-28 05:13:04.575415] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:14.038 [2024-10-28 05:13:04.612539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:14.297 [2024-10-28 05:13:04.661303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:14.298 [2024-10-28 05:13:04.661360] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.298 [2024-10-28 05:13:04.661383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.298 [2024-10-28 05:13:04.661394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.298 [2024-10-28 05:13:04.661404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:14.298 [2024-10-28 05:13:04.663004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:14.298 [2024-10-28 05:13:04.663121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:14.298 [2024-10-28 05:13:04.663184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:14.298 [2024-10-28 05:13:04.663187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.298 [2024-10-28 05:13:04.752666] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:14.298 [2024-10-28 05:13:04.752843] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:14.298 [2024-10-28 05:13:04.753168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:14.298 [2024-10-28 05:13:04.753762] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:14.298 [2024-10-28 05:13:04.754043] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
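The target itself is launched inside the namespace (nvmf/common.sh@506 above, core mask 0x1E giving the four reactors on cores 1-4 reported here), and the harness then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-up step, with paths shortened and scripts/rpc.py standing in for the harness's waitforlisten helper:

    # Start the NVMe-oF target inside the test namespace in interrupt mode on cores 1-4.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Poll the RPC socket until initialization has finished
    # (simplified stand-in for the waitforlisten helper used by the harness).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
        sleep 0.5
    done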
00:37:14.864 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:14.864 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:14.864 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:14.864 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:14.864 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.123 [2024-10-28 05:13:05.479881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.123 Malloc0 00:37:15.123 [2024-10-28 05:13:05.560043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2497283 00:37:15.123 05:13:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2497283 /var/tmp/bdevperf.sock 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2497283 ']' 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:15.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:15.123 { 00:37:15.123 "params": { 00:37:15.123 "name": "Nvme$subsystem", 00:37:15.123 "trtype": "$TEST_TRANSPORT", 00:37:15.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:15.123 "adrfam": "ipv4", 00:37:15.123 "trsvcid": "$NVMF_PORT", 00:37:15.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:15.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:15.123 "hdgst": ${hdgst:-false}, 00:37:15.123 "ddgst": ${ddgst:-false} 00:37:15.123 }, 00:37:15.123 "method": "bdev_nvme_attach_controller" 00:37:15.123 } 00:37:15.123 EOF 00:37:15.123 )") 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
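Here host_management.sh@72 starts the initiator side: bdevperf runs in the root namespace with its own RPC socket, and its bdev configuration is generated on the fly by gen_nvmf_target_json and handed over as a file-descriptor path (/dev/fd/63 in the trace); the resulting attach-controller JSON is printed just below. A sketch of how that launch is wired together, assuming bash process substitution is what produces the /dev/fd path:

    # Launch bdevperf against the target, feeding the generated NVMe-oF attach config
    # through process substitution (appears as /dev/fd/63 in the trace above).
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!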
00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:37:15.123 05:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:15.123 "params": { 00:37:15.123 "name": "Nvme0", 00:37:15.123 "trtype": "tcp", 00:37:15.123 "traddr": "10.0.0.2", 00:37:15.123 "adrfam": "ipv4", 00:37:15.123 "trsvcid": "4420", 00:37:15.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.123 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.123 "hdgst": false, 00:37:15.123 "ddgst": false 00:37:15.123 }, 00:37:15.123 "method": "bdev_nvme_attach_controller" 00:37:15.123 }' 00:37:15.123 [2024-10-28 05:13:05.645071] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:37:15.123 [2024-10-28 05:13:05.645160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497283 ] 00:37:15.382 [2024-10-28 05:13:05.777601] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:15.382 [2024-10-28 05:13:05.815372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.382 [2024-10-28 05:13:05.861695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:15.641 Running I/O for 10 seconds... 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@54 -- # (( i != 0 )) 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.210 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:16.210 [2024-10-28 05:13:06.691899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.691974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be 
set 00:37:16.210 [2024-10-28 05:13:06.692146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.210 [2024-10-28 05:13:06.692786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.211 [2024-10-28 05:13:06.692798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.211 [2024-10-28 05:13:06.692810] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.211 [2024-10-28 05:13:06.692823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.211 [2024-10-28 05:13:06.692835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.211 [2024-10-28 05:13:06.692847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.211 [2024-10-28 05:13:06.692859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.211 [2024-10-28 05:13:06.692873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.211 [2024-10-28 05:13:06.692886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.211 [2024-10-28 05:13:06.692899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.211 [2024-10-28 05:13:06.692911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.211 [2024-10-28 05:13:06.692923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac0f0 is same with the state(6) to be set 00:37:16.211 [2024-10-28 05:13:06.693078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:16.211 [2024-10-28 05:13:06.693288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 
[2024-10-28 05:13:06.693596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 
05:13:06.693938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.693969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.693983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.694002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.694016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.694032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.694046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.694061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.694074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.694089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.694103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.694118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.694131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.694146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.694160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.694175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.694189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.694203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 05:13:06.694216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.211 [2024-10-28 05:13:06.694231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.211 [2024-10-28 
05:13:06.694244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 
05:13:06.694542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 
05:13:06.694862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.694984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.694998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.695013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.695026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.695041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.695054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.695069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.695082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.695096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.212 [2024-10-28 05:13:06.695110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:16.212 [2024-10-28 05:13:06.695123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b9960 is same with the state(6) to be set 00:37:16.212 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.212 [2024-10-28 05:13:06.696360] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:16.212 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:16.212 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.212 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:16.212 task offset: 114688 on job bdev=Nvme0n1 fails 00:37:16.212 00:37:16.212 Latency(us) 00:37:16.212 [2024-10-28T04:13:06.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:16.212 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:16.212 Job: Nvme0n1 ended in about 0.62 seconds with error 00:37:16.212 Verification LBA range: start 0x0 length 0x400 00:37:16.212 Nvme0n1 : 0.62 1455.89 90.99 103.99 0.00 40187.00 8515.91 34063.63 00:37:16.212 [2024-10-28T04:13:06.808Z] =================================================================================================================== 00:37:16.212 [2024-10-28T04:13:06.808Z] Total : 1455.89 90.99 103.99 0.00 40187.00 8515.91 34063.63 00:37:16.212 [2024-10-28 05:13:06.698501] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:16.212 [2024-10-28 05:13:06.698542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0580 (9): Bad file descriptor 00:37:16.212 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.212 05:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:16.471 [2024-10-28 05:13:06.830811] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
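For reference, the rpc_cmd helper used above is a thin wrapper around scripts/rpc.py aimed at the target's RPC socket. A minimal sketch of the same re-authorization step issued by hand follows; the socket path is assumed to be the default /var/tmp/spdk.sock, and the follow-up query is only an optional sanity check, not part of the test script.

# Re-allow the initiator host on cnode0 so the pending controller reset can reconnect.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Optional: confirm the host now appears in the subsystem's allowed-host list.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0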
00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2497283 00:37:17.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2497283) - No such process 00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:17.405 { 00:37:17.405 "params": { 00:37:17.405 "name": "Nvme$subsystem", 00:37:17.405 "trtype": "$TEST_TRANSPORT", 00:37:17.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:17.405 "adrfam": "ipv4", 00:37:17.405 "trsvcid": "$NVMF_PORT", 00:37:17.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:17.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:17.405 "hdgst": ${hdgst:-false}, 00:37:17.405 "ddgst": ${ddgst:-false} 00:37:17.405 }, 00:37:17.405 "method": "bdev_nvme_attach_controller" 00:37:17.405 } 00:37:17.405 EOF 00:37:17.405 )") 00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:37:17.405 05:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:17.405 "params": { 00:37:17.405 "name": "Nvme0", 00:37:17.405 "trtype": "tcp", 00:37:17.405 "traddr": "10.0.0.2", 00:37:17.405 "adrfam": "ipv4", 00:37:17.405 "trsvcid": "4420", 00:37:17.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:17.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:17.405 "hdgst": false, 00:37:17.405 "ddgst": false 00:37:17.405 }, 00:37:17.405 "method": "bdev_nvme_attach_controller" 00:37:17.405 }' 00:37:17.405 [2024-10-28 05:13:07.755435] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
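The JSON object printed above is what bdevperf reads from /dev/fd/62. Written out as a standalone file it would look roughly like the sketch below; the params block is copied from the log, while the outer "subsystems"/"bdev" wrapper and the temporary file name are assumptions based on the standard SPDK JSON-config layout.

# Hypothetical standalone equivalent of the generated --json input.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same workload as the logged run: 64 outstanding 64 KiB verify I/Os for 1 second.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1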
00:37:17.405 [2024-10-28 05:13:07.755515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497551 ] 00:37:17.405 [2024-10-28 05:13:07.887691] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:17.405 [2024-10-28 05:13:07.925423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.405 [2024-10-28 05:13:07.971841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.668 Running I/O for 1 seconds... 00:37:18.606 1472.00 IOPS, 92.00 MiB/s 00:37:18.606 Latency(us) 00:37:18.606 [2024-10-28T04:13:09.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:18.606 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:18.606 Verification LBA range: start 0x0 length 0x400 00:37:18.606 Nvme0n1 : 1.01 1516.44 94.78 0.00 0.00 41550.44 7445.34 36594.08 00:37:18.606 [2024-10-28T04:13:09.202Z] =================================================================================================================== 00:37:18.606 [2024-10-28T04:13:09.202Z] Total : 1516.44 94.78 0.00 0.00 41550.44 7445.34 36594.08 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.865 rmmod nvme_tcp 00:37:18.865 rmmod nvme_fabrics 00:37:18.865 rmmod nvme_keyring 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 
-- # '[' -n 2497108 ']' 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2497108 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2497108 ']' 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2497108 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2497108 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2497108' 00:37:18.865 killing process with pid 2497108 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2497108 00:37:18.865 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2497108 00:37:19.124 [2024-10-28 05:13:09.653729] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:19.124 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:19.124 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:19.124 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:19.124 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:19.124 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:37:19.124 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:19.124 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:37:19.124 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:19.124 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:19.124 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.124 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:19.124 05:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:21.657 05:13:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:21.657 00:37:21.657 real 0m9.605s 00:37:21.657 user 0m18.406s 00:37:21.657 sys 0m3.870s 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:21.657 ************************************ 00:37:21.657 END TEST nvmf_host_management 00:37:21.657 ************************************ 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:21.657 ************************************ 00:37:21.657 START TEST nvmf_lvol 00:37:21.657 ************************************ 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:21.657 * Looking for test storage... 00:37:21.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1689 -- # lcov --version 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:21.657 05:13:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:37:21.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.657 --rc genhtml_branch_coverage=1 00:37:21.657 --rc genhtml_function_coverage=1 00:37:21.657 --rc genhtml_legend=1 00:37:21.657 --rc geninfo_all_blocks=1 00:37:21.657 --rc geninfo_unexecuted_blocks=1 00:37:21.657 00:37:21.657 ' 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:37:21.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.657 --rc genhtml_branch_coverage=1 00:37:21.657 --rc genhtml_function_coverage=1 00:37:21.657 --rc genhtml_legend=1 00:37:21.657 --rc geninfo_all_blocks=1 00:37:21.657 --rc geninfo_unexecuted_blocks=1 00:37:21.657 00:37:21.657 ' 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:37:21.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.657 --rc genhtml_branch_coverage=1 00:37:21.657 --rc genhtml_function_coverage=1 00:37:21.657 --rc genhtml_legend=1 00:37:21.657 --rc geninfo_all_blocks=1 00:37:21.657 --rc geninfo_unexecuted_blocks=1 00:37:21.657 00:37:21.657 ' 00:37:21.657 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1703 -- # LCOV='lcov 00:37:21.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.657 --rc genhtml_branch_coverage=1 00:37:21.657 --rc genhtml_function_coverage=1 00:37:21.657 --rc genhtml_legend=1 00:37:21.657 --rc geninfo_all_blocks=1 00:37:21.657 --rc geninfo_unexecuted_blocks=1 00:37:21.657 00:37:21.657 ' 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:21.658 05:13:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:21.658 05:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:23.557 05:13:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:23.557 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:23.557 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:23.558 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:23.558 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:23.558 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:23.558 
05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:23.558 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:23.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:23.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:37:23.817 00:37:23.817 --- 10.0.0.2 ping statistics --- 00:37:23.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.817 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:23.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:23.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:37:23.817 00:37:23.817 --- 10.0.0.1 ping statistics --- 00:37:23.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.817 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2499609 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2499609 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2499609 ']' 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:23.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:23.817 05:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:23.817 [2024-10-28 05:13:14.255199] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
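Condensed from the nvmf_tcp_init xtrace above, the namespace plumbing for the two cvl ports and the connectivity check boil down to the following sketch (interface names and addresses taken from the log; run as root; the harness additionally tags the iptables rule with an SPDK_NVMF comment).

# Move the target port into its own namespace; the initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends (initiator 10.0.0.1, target 10.0.0.2) and bring the links up.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open TCP/4420 on the initiator interface and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1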
00:37:23.817 [2024-10-28 05:13:14.256356] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:37:23.817 [2024-10-28 05:13:14.256432] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:23.817 [2024-10-28 05:13:14.395926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:24.076 [2024-10-28 05:13:14.439447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:24.076 [2024-10-28 05:13:14.490797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:24.076 [2024-10-28 05:13:14.490861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:24.076 [2024-10-28 05:13:14.490878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:24.076 [2024-10-28 05:13:14.490892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:24.076 [2024-10-28 05:13:14.490903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:24.076 [2024-10-28 05:13:14.492466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.076 [2024-10-28 05:13:14.492546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:24.076 [2024-10-28 05:13:14.492549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.076 [2024-10-28 05:13:14.592977] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:24.076 [2024-10-28 05:13:14.593185] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:24.076 [2024-10-28 05:13:14.593215] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:24.076 [2024-10-28 05:13:14.593491] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
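nvmfappstart boots the target inside the namespace with interrupt mode and a 3-core mask, then blocks until its RPC socket answers before the lvol test proceeds. A hedged sketch of the equivalent by-hand sequence follows; the polling loop merely stands in for the waitforlisten helper, whose exact implementation is not shown in this excerpt, and the default /var/tmp/spdk.sock socket is assumed.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Launch nvmf_tgt in the target namespace: shm id 0, all trace groups enabled,
# interrupt mode, cores 0-2 (-m 0x7), matching the logged invocation.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!

# Stand-in for waitforlisten: poll the RPC socket until the application responds.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is ready"

Once the socket is live, the test creates the TCP transport and builds the lvol stack via the same rpc.py, as the following log lines show.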
00:37:25.010 05:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:25.010 05:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:37:25.010 05:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:25.010 05:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:25.010 05:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:25.010 05:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:25.010 05:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:25.010 [2024-10-28 05:13:15.533325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:25.010 05:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:25.269 05:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:25.269 05:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:25.836 05:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:25.836 05:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:25.836 05:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:26.403 05:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f0e57770-8f16-452e-b778-d8e81fb73f43 00:37:26.403 05:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f0e57770-8f16-452e-b778-d8e81fb73f43 lvol 20 00:37:26.403 05:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7ffabaea-0699-4a60-8a55-2b7c51dc91d8 00:37:26.403 05:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:26.970 05:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7ffabaea-0699-4a60-8a55-2b7c51dc91d8 00:37:26.970 05:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:27.228 [2024-10-28 05:13:17.805527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:37:27.486 05:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:27.744 05:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2500149 00:37:27.744 05:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:27.744 05:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:28.680 05:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7ffabaea-0699-4a60-8a55-2b7c51dc91d8 MY_SNAPSHOT 00:37:28.938 05:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4475c342-4b54-415f-9adc-cd479059afc6 00:37:28.938 05:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7ffabaea-0699-4a60-8a55-2b7c51dc91d8 30 00:37:29.203 05:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4475c342-4b54-415f-9adc-cd479059afc6 MY_CLONE 00:37:29.465 05:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b4c6dba9-f05e-4e7e-b100-7629739a2246 00:37:29.465 05:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b4c6dba9-f05e-4e7e-b100-7629739a2246 00:37:30.399 05:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2500149 00:37:38.568 Initializing NVMe Controllers 00:37:38.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:38.568 Controller IO queue size 128, less than required. 00:37:38.568 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:38.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:38.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:38.568 Initialization complete. Launching workers. 
00:37:38.568 ======================================================== 00:37:38.568 Latency(us) 00:37:38.568 Device Information : IOPS MiB/s Average min max 00:37:38.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10532.48 41.14 12156.71 1204.59 82645.69 00:37:38.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10346.68 40.42 12378.01 5137.98 80049.19 00:37:38.568 ======================================================== 00:37:38.568 Total : 20879.17 81.56 12266.37 1204.59 82645.69 00:37:38.568 00:37:38.568 05:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:38.568 05:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7ffabaea-0699-4a60-8a55-2b7c51dc91d8 00:37:38.568 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f0e57770-8f16-452e-b778-d8e81fb73f43 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:38.827 rmmod nvme_tcp 00:37:38.827 rmmod nvme_fabrics 00:37:38.827 rmmod nvme_keyring 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2499609 ']' 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2499609 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2499609 ']' 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2499609 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:38.827 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2499609 00:37:39.085 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:39.085 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:39.085 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2499609' 00:37:39.085 killing process with pid 2499609 00:37:39.085 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2499609 00:37:39.085 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2499609 00:37:39.344 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:39.344 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:39.344 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:39.344 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:39.344 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:37:39.344 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:39.344 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:37:39.344 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:39.344 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:39.344 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.344 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:39.344 05:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:41.248 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:41.249 00:37:41.249 real 0m19.958s 00:37:41.249 user 0m55.248s 00:37:41.249 sys 0m8.329s 00:37:41.249 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:41.249 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:41.249 ************************************ 00:37:41.249 END TEST nvmf_lvol 00:37:41.249 ************************************ 00:37:41.249 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:41.249 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:41.249 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:41.249 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:41.249 ************************************ 00:37:41.249 START TEST nvmf_lvs_grow 00:37:41.249 
************************************ 00:37:41.249 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:41.508 * Looking for test storage... 00:37:41.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lcov --version 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:37:41.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.508 --rc genhtml_branch_coverage=1 00:37:41.508 --rc genhtml_function_coverage=1 00:37:41.508 --rc genhtml_legend=1 00:37:41.508 --rc geninfo_all_blocks=1 00:37:41.508 --rc geninfo_unexecuted_blocks=1 00:37:41.508 00:37:41.508 ' 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:37:41.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.508 --rc genhtml_branch_coverage=1 00:37:41.508 --rc genhtml_function_coverage=1 00:37:41.508 --rc genhtml_legend=1 00:37:41.508 --rc geninfo_all_blocks=1 00:37:41.508 --rc geninfo_unexecuted_blocks=1 00:37:41.508 00:37:41.508 ' 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:37:41.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.508 --rc genhtml_branch_coverage=1 00:37:41.508 --rc genhtml_function_coverage=1 00:37:41.508 --rc genhtml_legend=1 00:37:41.508 --rc geninfo_all_blocks=1 00:37:41.508 --rc geninfo_unexecuted_blocks=1 00:37:41.508 00:37:41.508 ' 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:37:41.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:41.508 --rc genhtml_branch_coverage=1 00:37:41.508 --rc genhtml_function_coverage=1 00:37:41.508 --rc genhtml_legend=1 00:37:41.508 --rc geninfo_all_blocks=1 00:37:41.508 --rc geninfo_unexecuted_blocks=1 00:37:41.508 00:37:41.508 ' 00:37:41.508 05:13:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:41.508 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
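Around this point common.sh assembles the target's command line into the NVMF_APP array, which is launched further down in the log. A rough sketch of that pattern; the binary path is assumed to be the same build tree used elsewhere in this run, and the flags mirror what build_nvmf_app_args echoes here:

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)  # assumed: in-tree build
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id and tracepoint group mask
    NVMF_APP+=("${NO_HUGE[@]}")                   # empty unless the no-hugepages variant is requested
    NVMF_APP+=(--interrupt-mode)                  # appended because this suite runs with --interrupt-mode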
00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:41.509 05:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:43.410 05:13:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
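The fragment above buckets candidate NICs by PCI vendor/device id before choosing the interfaces for the TCP tests. A compressed view of that logic, reusing the same ids and the pci_bus_cache map that common.sh populates elsewhere:

    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})  # Intel E810 variants
    x722=(${pci_bus_cache["$intel:0x37d2"]})                                    # Intel X722
    pci_devs=("${e810[@]}")                        # E810 ports are preferred when present
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # kernel netdev name(s) behind each PCI function
        net_devs+=("${pci_net_devs[@]##*/}")               # e.g. cvl_0_0 / cvl_0_1 on this host
    done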
00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:43.410 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:43.410 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:43.410 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:43.411 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:43.411 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:43.411 05:13:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:43.411 05:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:43.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:43.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:37:43.670 00:37:43.670 --- 10.0.0.2 ping statistics --- 00:37:43.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.670 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:43.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:43.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:37:43.670 00:37:43.670 --- 10.0.0.1 ping statistics --- 00:37:43.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.670 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2503356 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2503356 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2503356 ']' 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:43.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:43.670 05:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:43.670 [2024-10-28 05:13:34.173517] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
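The network plumbing exercised just above, before the target comes up, reduces to the steps below; the interface names (cvl_0_0 / cvl_0_1) and addresses are the ones used on this host and will differ elsewhere:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the listener
    ping -c 1 10.0.0.2                                             # reachability check from the initiator side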
00:37:43.670 [2024-10-28 05:13:34.174768] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:37:43.670 [2024-10-28 05:13:34.174825] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:43.929 [2024-10-28 05:13:34.318577] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:43.929 [2024-10-28 05:13:34.359013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.929 [2024-10-28 05:13:34.406530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:43.929 [2024-10-28 05:13:34.406610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:43.929 [2024-10-28 05:13:34.406627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:43.929 [2024-10-28 05:13:34.406650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:43.929 [2024-10-28 05:13:34.406662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:43.929 [2024-10-28 05:13:34.407306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.929 [2024-10-28 05:13:34.504589] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:43.929 [2024-10-28 05:13:34.504962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
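With the namespaces in place, nvmfappstart launches the target inside the namespace in interrupt mode and waits for its RPC socket, as echoed above. A minimal stand-in for that step; the polling loop only approximates the waitforlisten helper:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # wait for /var/tmp/spdk.sock to answer
    # Tracepoints can be inspected at runtime with 'spdk_trace -s nvmf -i 0', as the startup banner above notes.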
00:37:44.863 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:44.864 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:37:44.864 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:44.864 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:44.864 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:44.864 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:44.864 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:44.864 [2024-10-28 05:13:35.439898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:44.864 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:44.864 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:44.864 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:44.864 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:45.122 ************************************ 00:37:45.122 START TEST lvs_grow_clean 00:37:45.122 ************************************ 00:37:45.122 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:37:45.122 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:45.122 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:45.122 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:45.122 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:45.122 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:45.122 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:45.122 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:45.122 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:45.122 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:45.380 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:45.380 05:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:45.638 05:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e 00:37:45.638 05:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e 00:37:45.638 05:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:45.897 05:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:45.897 05:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:45.897 05:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e lvol 150 00:37:46.155 05:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=426c6f28-1ecf-4eca-b0b5-81f40ac1f51c 00:37:46.155 05:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:46.155 05:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:46.414 [2024-10-28 05:13:36.899806] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:46.414 [2024-10-28 05:13:36.899910] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:46.414 true 00:37:46.414 05:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e 00:37:46.414 05:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:46.672 05:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:46.672 05:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:46.937 05:13:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 426c6f28-1ecf-4eca-b0b5-81f40ac1f51c 00:37:47.194 05:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:47.451 [2024-10-28 05:13:38.032124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:47.709 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:47.967 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2503855 00:37:47.967 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:47.967 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:47.967 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2503855 /var/tmp/bdevperf.sock 00:37:47.967 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2503855 ']' 00:37:47.967 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:47.967 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:47.967 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:47.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:47.967 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:47.967 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:47.967 [2024-10-28 05:13:38.408029] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:37:47.967 [2024-10-28 05:13:38.408121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503855 ] 00:37:47.967 [2024-10-28 05:13:38.540619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
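The lvs_grow_clean setup above builds an lvstore on an AIO bdev backed by a sparse file, then enlarges the file and lets the lvstore grow into the new space (the grow call itself appears a little further down). A sketch of that sequence with the same sizes; the UUID plumbing is illustrative:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096                      # AIO bdev with a 4 KiB block size
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)               # 150 MiB volume, exported below over NVMe/TCP
    truncate -s 400M "$aio"                                        # grow the backing file
    $rpc bdev_aio_rescan aio_bdev                                  # bdev picks up the new block count (51200 -> 102400)
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                          # data clusters grow from 49 to 99, as checked later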
00:37:48.225 [2024-10-28 05:13:38.582070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.225 [2024-10-28 05:13:38.634171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:48.225 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:48.225 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:37:48.225 05:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:48.791 Nvme0n1 00:37:48.791 05:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:49.050 [ 00:37:49.050 { 00:37:49.050 "name": "Nvme0n1", 00:37:49.050 "aliases": [ 00:37:49.050 "426c6f28-1ecf-4eca-b0b5-81f40ac1f51c" 00:37:49.050 ], 00:37:49.050 "product_name": "NVMe disk", 00:37:49.050 "block_size": 4096, 00:37:49.050 "num_blocks": 38912, 00:37:49.050 "uuid": "426c6f28-1ecf-4eca-b0b5-81f40ac1f51c", 00:37:49.050 "numa_id": 0, 00:37:49.050 "assigned_rate_limits": { 00:37:49.050 "rw_ios_per_sec": 0, 00:37:49.050 "rw_mbytes_per_sec": 0, 00:37:49.050 "r_mbytes_per_sec": 0, 00:37:49.050 "w_mbytes_per_sec": 0 00:37:49.050 }, 00:37:49.050 "claimed": false, 00:37:49.050 "zoned": false, 00:37:49.050 "supported_io_types": { 00:37:49.050 "read": true, 00:37:49.050 "write": true, 00:37:49.050 "unmap": true, 00:37:49.050 "flush": true, 00:37:49.050 "reset": true, 00:37:49.050 "nvme_admin": true, 00:37:49.050 "nvme_io": true, 00:37:49.050 "nvme_io_md": false, 00:37:49.050 "write_zeroes": true, 00:37:49.050 "zcopy": false, 00:37:49.050 "get_zone_info": false, 00:37:49.050 "zone_management": false, 00:37:49.050 "zone_append": false, 00:37:49.050 "compare": true, 00:37:49.050 "compare_and_write": true, 00:37:49.050 "abort": true, 00:37:49.050 "seek_hole": false, 00:37:49.050 "seek_data": false, 00:37:49.050 "copy": true, 00:37:49.050 "nvme_iov_md": false 00:37:49.050 }, 00:37:49.050 "memory_domains": [ 00:37:49.050 { 00:37:49.050 "dma_device_id": "system", 00:37:49.050 "dma_device_type": 1 00:37:49.050 } 00:37:49.050 ], 00:37:49.050 "driver_specific": { 00:37:49.050 "nvme": [ 00:37:49.050 { 00:37:49.050 "trid": { 00:37:49.050 "trtype": "TCP", 00:37:49.050 "adrfam": "IPv4", 00:37:49.050 "traddr": "10.0.0.2", 00:37:49.050 "trsvcid": "4420", 00:37:49.050 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:49.050 }, 00:37:49.050 "ctrlr_data": { 00:37:49.050 "cntlid": 1, 00:37:49.050 "vendor_id": "0x8086", 00:37:49.050 "model_number": "SPDK bdev Controller", 00:37:49.050 "serial_number": "SPDK0", 00:37:49.050 "firmware_revision": "25.01", 00:37:49.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:49.050 "oacs": { 00:37:49.050 "security": 0, 00:37:49.050 "format": 0, 00:37:49.050 "firmware": 0, 00:37:49.050 "ns_manage": 0 00:37:49.050 }, 00:37:49.050 "multi_ctrlr": true, 00:37:49.050 "ana_reporting": false 00:37:49.050 }, 00:37:49.050 "vs": { 00:37:49.050 "nvme_version": "1.3" 00:37:49.050 }, 00:37:49.050 "ns_data": { 00:37:49.050 "id": 1, 00:37:49.050 "can_share": true 00:37:49.050 } 00:37:49.050 } 00:37:49.050 ], 00:37:49.050 
"mp_policy": "active_passive" 00:37:49.050 } 00:37:49.050 } 00:37:49.050 ] 00:37:49.050 05:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2503988 00:37:49.050 05:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:49.050 05:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:49.050 Running I/O for 10 seconds... 00:37:50.427 Latency(us) 00:37:50.427 [2024-10-28T04:13:41.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:50.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:50.427 Nvme0n1 : 1.00 13255.00 51.78 0.00 0.00 0.00 0.00 0.00 00:37:50.427 [2024-10-28T04:13:41.023Z] =================================================================================================================== 00:37:50.427 [2024-10-28T04:13:41.023Z] Total : 13255.00 51.78 0.00 0.00 0.00 0.00 0.00 00:37:50.427 00:37:50.994 05:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e 00:37:51.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:51.253 Nvme0n1 : 2.00 13505.00 52.75 0.00 0.00 0.00 0.00 0.00 00:37:51.253 [2024-10-28T04:13:41.849Z] =================================================================================================================== 00:37:51.253 [2024-10-28T04:13:41.849Z] Total : 13505.00 52.75 0.00 0.00 0.00 0.00 0.00 00:37:51.253 00:37:51.253 true 00:37:51.253 05:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e 00:37:51.253 05:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:51.512 05:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:51.512 05:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:51.512 05:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2503988 00:37:52.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:52.079 Nvme0n1 : 3.00 13842.33 54.07 0.00 0.00 0.00 0.00 0.00 00:37:52.079 [2024-10-28T04:13:42.675Z] =================================================================================================================== 00:37:52.079 [2024-10-28T04:13:42.675Z] Total : 13842.33 54.07 0.00 0.00 0.00 0.00 0.00 00:37:52.079 00:37:53.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:53.016 Nvme0n1 : 4.00 13980.00 54.61 0.00 0.00 0.00 0.00 0.00 00:37:53.016 [2024-10-28T04:13:43.612Z] =================================================================================================================== 00:37:53.016 [2024-10-28T04:13:43.612Z] Total : 13980.00 54.61 0.00 0.00 0.00 
0.00 0.00 00:37:53.016 00:37:54.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:54.393 Nvme0n1 : 5.00 13987.00 54.64 0.00 0.00 0.00 0.00 0.00 00:37:54.393 [2024-10-28T04:13:44.989Z] =================================================================================================================== 00:37:54.393 [2024-10-28T04:13:44.989Z] Total : 13987.00 54.64 0.00 0.00 0.00 0.00 0.00 00:37:54.393 00:37:55.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:55.328 Nvme0n1 : 6.00 14002.00 54.70 0.00 0.00 0.00 0.00 0.00 00:37:55.328 [2024-10-28T04:13:45.924Z] =================================================================================================================== 00:37:55.328 [2024-10-28T04:13:45.924Z] Total : 14002.00 54.70 0.00 0.00 0.00 0.00 0.00 00:37:55.328 00:37:56.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:56.264 Nvme0n1 : 7.00 14012.71 54.74 0.00 0.00 0.00 0.00 0.00 00:37:56.264 [2024-10-28T04:13:46.860Z] =================================================================================================================== 00:37:56.264 [2024-10-28T04:13:46.860Z] Total : 14012.71 54.74 0.00 0.00 0.00 0.00 0.00 00:37:56.264 00:37:57.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:57.197 Nvme0n1 : 8.00 14028.50 54.80 0.00 0.00 0.00 0.00 0.00 00:37:57.197 [2024-10-28T04:13:47.793Z] =================================================================================================================== 00:37:57.197 [2024-10-28T04:13:47.793Z] Total : 14028.50 54.80 0.00 0.00 0.00 0.00 0.00 00:37:57.197 00:37:58.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:58.132 Nvme0n1 : 9.00 14040.67 54.85 0.00 0.00 0.00 0.00 0.00 00:37:58.132 [2024-10-28T04:13:48.728Z] =================================================================================================================== 00:37:58.132 [2024-10-28T04:13:48.728Z] Total : 14040.67 54.85 0.00 0.00 0.00 0.00 0.00 00:37:58.132 00:37:59.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:59.068 Nvme0n1 : 10.00 14056.60 54.91 0.00 0.00 0.00 0.00 0.00 00:37:59.068 [2024-10-28T04:13:49.664Z] =================================================================================================================== 00:37:59.068 [2024-10-28T04:13:49.664Z] Total : 14056.60 54.91 0.00 0.00 0.00 0.00 0.00 00:37:59.068 00:37:59.068 00:37:59.068 Latency(us) 00:37:59.068 [2024-10-28T04:13:49.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:59.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:59.068 Nvme0n1 : 10.00 14058.64 54.92 0.00 0.00 9098.90 2518.28 18005.06 00:37:59.068 [2024-10-28T04:13:49.664Z] =================================================================================================================== 00:37:59.068 [2024-10-28T04:13:49.664Z] Total : 14058.64 54.92 0.00 0.00 9098.90 2518.28 18005.06 00:37:59.068 { 00:37:59.068 "results": [ 00:37:59.068 { 00:37:59.068 "job": "Nvme0n1", 00:37:59.068 "core_mask": "0x2", 00:37:59.068 "workload": "randwrite", 00:37:59.068 "status": "finished", 00:37:59.068 "queue_depth": 128, 00:37:59.068 "io_size": 4096, 00:37:59.068 "runtime": 10.0031, 00:37:59.068 "iops": 14058.641821035479, 00:37:59.068 "mibps": 54.91656961341984, 00:37:59.068 "io_failed": 0, 00:37:59.068 "io_timeout": 0, 00:37:59.068 "avg_latency_us": 
9098.899483866793, 00:37:59.068 "min_latency_us": 2518.275849266753, 00:37:59.068 "max_latency_us": 18005.064043066643 00:37:59.068 } 00:37:59.068 ], 00:37:59.068 "core_count": 1 00:37:59.068 } 00:37:59.068 05:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2503855 00:37:59.068 05:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2503855 ']' 00:37:59.068 05:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2503855 00:37:59.068 05:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:37:59.068 05:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:59.068 05:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2503855 00:37:59.326 05:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:59.326 05:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:59.326 05:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2503855' 00:37:59.326 killing process with pid 2503855 00:37:59.326 05:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2503855 00:37:59.326 Received shutdown signal, test time was about 10.000000 seconds 00:37:59.326 00:37:59.326 Latency(us) 00:37:59.326 [2024-10-28T04:13:49.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:59.326 [2024-10-28T04:13:49.922Z] =================================================================================================================== 00:37:59.326 [2024-10-28T04:13:49.922Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:59.326 05:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2503855 00:37:59.326 05:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:59.584 05:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:00.150 05:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e 00:38:00.151 05:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:00.409 05:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:00.409 05:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == 
\d\i\r\t\y ]] 00:38:00.409 05:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:00.668 [2024-10-28 05:13:51.063858] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:00.668 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e 00:38:00.668 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:38:00.668 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e 00:38:00.668 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:00.668 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:00.668 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:00.668 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:00.668 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:00.668 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:00.668 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:00.668 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:00.668 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e 00:38:00.927 request: 00:38:00.927 { 00:38:00.927 "uuid": "695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e", 00:38:00.927 "method": "bdev_lvol_get_lvstores", 00:38:00.927 "req_id": 1 00:38:00.927 } 00:38:00.927 Got JSON-RPC error response 00:38:00.927 response: 00:38:00.927 { 00:38:00.927 "code": -19, 00:38:00.927 "message": "No such device" 00:38:00.927 } 00:38:00.927 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:38:00.927 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:00.927 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ 
-n '' ]] 00:38:00.927 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:00.927 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:01.186 aio_bdev 00:38:01.186 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 426c6f28-1ecf-4eca-b0b5-81f40ac1f51c 00:38:01.186 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=426c6f28-1ecf-4eca-b0b5-81f40ac1f51c 00:38:01.186 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:01.186 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:38:01.186 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:01.186 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:01.186 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:01.445 05:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 426c6f28-1ecf-4eca-b0b5-81f40ac1f51c -t 2000 00:38:01.704 [ 00:38:01.704 { 00:38:01.704 "name": "426c6f28-1ecf-4eca-b0b5-81f40ac1f51c", 00:38:01.704 "aliases": [ 00:38:01.705 "lvs/lvol" 00:38:01.705 ], 00:38:01.705 "product_name": "Logical Volume", 00:38:01.705 "block_size": 4096, 00:38:01.705 "num_blocks": 38912, 00:38:01.705 "uuid": "426c6f28-1ecf-4eca-b0b5-81f40ac1f51c", 00:38:01.705 "assigned_rate_limits": { 00:38:01.705 "rw_ios_per_sec": 0, 00:38:01.705 "rw_mbytes_per_sec": 0, 00:38:01.705 "r_mbytes_per_sec": 0, 00:38:01.705 "w_mbytes_per_sec": 0 00:38:01.705 }, 00:38:01.705 "claimed": false, 00:38:01.705 "zoned": false, 00:38:01.705 "supported_io_types": { 00:38:01.705 "read": true, 00:38:01.705 "write": true, 00:38:01.705 "unmap": true, 00:38:01.705 "flush": false, 00:38:01.705 "reset": true, 00:38:01.705 "nvme_admin": false, 00:38:01.705 "nvme_io": false, 00:38:01.705 "nvme_io_md": false, 00:38:01.705 "write_zeroes": true, 00:38:01.705 "zcopy": false, 00:38:01.705 "get_zone_info": false, 00:38:01.705 "zone_management": false, 00:38:01.705 "zone_append": false, 00:38:01.705 "compare": false, 00:38:01.705 "compare_and_write": false, 00:38:01.705 "abort": false, 00:38:01.705 "seek_hole": true, 00:38:01.705 "seek_data": true, 00:38:01.705 "copy": false, 00:38:01.705 "nvme_iov_md": false 00:38:01.705 }, 00:38:01.705 "driver_specific": { 00:38:01.705 "lvol": { 00:38:01.705 "lvol_store_uuid": "695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e", 00:38:01.705 "base_bdev": "aio_bdev", 00:38:01.705 "thin_provision": false, 00:38:01.705 "num_allocated_clusters": 38, 00:38:01.705 "snapshot": false, 00:38:01.705 "clone": false, 00:38:01.705 "esnap_clone": false 00:38:01.705 } 00:38:01.705 } 00:38:01.705 
} 00:38:01.705 ] 00:38:01.705 05:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:38:01.705 05:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e 00:38:01.705 05:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:01.963 05:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:01.963 05:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e 00:38:01.963 05:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:02.222 05:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:02.222 05:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 426c6f28-1ecf-4eca-b0b5-81f40ac1f51c 00:38:02.480 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 695f5f9a-a2e3-4a56-82c0-9d4b1ed7720e 00:38:03.047 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:03.304 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:03.304 00:38:03.304 real 0m18.201s 00:38:03.304 user 0m17.689s 00:38:03.305 sys 0m1.860s 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:03.305 ************************************ 00:38:03.305 END TEST lvs_grow_clean 00:38:03.305 ************************************ 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:03.305 ************************************ 00:38:03.305 START TEST lvs_grow_dirty 00:38:03.305 ************************************ 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 
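For reference, the clean-variant teardown and recovery traced above reduce to roughly the RPC sequence below. This is a sketch, not the test script itself: it assumes an SPDK checkout at ./spdk, jq on PATH, and shell variables $lvs and $lvol holding the lvstore and lvol UUIDs reported earlier in the log.

    rpc=./spdk/scripts/rpc.py   # placeholder path to the JSON-RPC client

    # Deleting the backing AIO bdev hot-removes the lvstore, so the next query must fail
    # with -19 "No such device" (the test wraps this in NOT to assert the failure).
    $rpc bdev_aio_delete aio_bdev
    $rpc bdev_lvol_get_lvstores -u "$lvs" && echo "unexpected success" >&2

    # Re-create the AIO bdev on the same file; lvstore and lvol are recovered from on-disk metadata.
    $rpc bdev_aio_create ./spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b "$lvol" -t 2000 > /dev/null

    # The grown lvstore should still report 99 data clusters with 61 of them free.
    free=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || echo "cluster check failed" >&2

    # Final cleanup, mirroring the end of the test.
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
    $rpc bdev_aio_delete aio_bdev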
00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:03.305 05:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:03.562 05:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:03.562 05:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:03.819 05:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:03.819 05:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:03.819 05:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:04.076 05:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:04.076 05:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:04.076 05:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 83a691bc-bd22-4e8d-9263-0793ec26a000 lvol 150 00:38:04.640 05:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e88ab9a8-67e5-486e-8639-73e10703851a 00:38:04.640 05:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:04.640 05:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:04.640 [2024-10-28 05:13:55.199810] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:04.640 [2024-10-28 05:13:55.199925] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:04.640 true 00:38:04.640 05:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:04.640 05:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:05.205 05:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:05.205 05:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:05.205 05:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e88ab9a8-67e5-486e-8639-73e10703851a 00:38:05.768 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:05.768 [2024-10-28 05:13:56.328093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:05.768 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:06.334 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2505903 00:38:06.334 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:06.334 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:06.334 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2505903 /var/tmp/bdevperf.sock 00:38:06.334 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2505903 ']' 00:38:06.334 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:06.334 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:06.334 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:06.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:06.334 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:06.334 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:06.334 [2024-10-28 05:13:56.672833] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:38:06.334 [2024-10-28 05:13:56.672914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505903 ] 00:38:06.334 [2024-10-28 05:13:56.807490] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:06.334 [2024-10-28 05:13:56.845305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.334 [2024-10-28 05:13:56.892213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:06.592 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:06.592 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:06.592 05:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:07.158 Nvme0n1 00:38:07.158 05:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:07.420 [ 00:38:07.420 { 00:38:07.420 "name": "Nvme0n1", 00:38:07.420 "aliases": [ 00:38:07.420 "e88ab9a8-67e5-486e-8639-73e10703851a" 00:38:07.420 ], 00:38:07.420 "product_name": "NVMe disk", 00:38:07.420 "block_size": 4096, 00:38:07.420 "num_blocks": 38912, 00:38:07.420 "uuid": "e88ab9a8-67e5-486e-8639-73e10703851a", 00:38:07.420 "numa_id": 0, 00:38:07.420 "assigned_rate_limits": { 00:38:07.420 "rw_ios_per_sec": 0, 00:38:07.420 "rw_mbytes_per_sec": 0, 00:38:07.420 "r_mbytes_per_sec": 0, 00:38:07.420 "w_mbytes_per_sec": 0 00:38:07.420 }, 00:38:07.420 "claimed": false, 00:38:07.420 "zoned": false, 00:38:07.420 "supported_io_types": { 00:38:07.420 "read": true, 00:38:07.420 "write": true, 00:38:07.420 "unmap": true, 00:38:07.420 "flush": true, 00:38:07.420 "reset": true, 00:38:07.420 "nvme_admin": true, 00:38:07.420 "nvme_io": true, 00:38:07.420 "nvme_io_md": false, 00:38:07.420 "write_zeroes": true, 00:38:07.420 "zcopy": false, 00:38:07.420 "get_zone_info": false, 00:38:07.420 "zone_management": false, 00:38:07.420 "zone_append": false, 
00:38:07.420 "compare": true, 00:38:07.420 "compare_and_write": true, 00:38:07.420 "abort": true, 00:38:07.420 "seek_hole": false, 00:38:07.420 "seek_data": false, 00:38:07.420 "copy": true, 00:38:07.420 "nvme_iov_md": false 00:38:07.420 }, 00:38:07.420 "memory_domains": [ 00:38:07.420 { 00:38:07.420 "dma_device_id": "system", 00:38:07.420 "dma_device_type": 1 00:38:07.420 } 00:38:07.420 ], 00:38:07.420 "driver_specific": { 00:38:07.420 "nvme": [ 00:38:07.420 { 00:38:07.420 "trid": { 00:38:07.420 "trtype": "TCP", 00:38:07.420 "adrfam": "IPv4", 00:38:07.420 "traddr": "10.0.0.2", 00:38:07.420 "trsvcid": "4420", 00:38:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:07.420 }, 00:38:07.420 "ctrlr_data": { 00:38:07.420 "cntlid": 1, 00:38:07.420 "vendor_id": "0x8086", 00:38:07.420 "model_number": "SPDK bdev Controller", 00:38:07.420 "serial_number": "SPDK0", 00:38:07.420 "firmware_revision": "25.01", 00:38:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:07.420 "oacs": { 00:38:07.420 "security": 0, 00:38:07.420 "format": 0, 00:38:07.420 "firmware": 0, 00:38:07.420 "ns_manage": 0 00:38:07.420 }, 00:38:07.420 "multi_ctrlr": true, 00:38:07.420 "ana_reporting": false 00:38:07.420 }, 00:38:07.420 "vs": { 00:38:07.420 "nvme_version": "1.3" 00:38:07.420 }, 00:38:07.420 "ns_data": { 00:38:07.420 "id": 1, 00:38:07.420 "can_share": true 00:38:07.420 } 00:38:07.420 } 00:38:07.420 ], 00:38:07.420 "mp_policy": "active_passive" 00:38:07.420 } 00:38:07.420 } 00:38:07.420 ] 00:38:07.420 05:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2506111 00:38:07.420 05:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:07.420 05:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:07.420 Running I/O for 10 seconds... 
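The dirty run is driven the same way as the clean one: the logical volume is exported over NVMe/TCP and bdevperf is pointed at it through its own RPC socket. A condensed sketch of the commands traced above (paths and the $lvol variable are illustrative; the addresses match the test network namespace):

    rpc=./spdk/scripts/rpc.py   # placeholder path, as above

    # Export the lvol bdev over NVMe/TCP.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Start bdevperf idle (-z), attach the remote namespace through the bdevperf RPC socket,
    # then launch the 10-second 4 KiB randwrite workload at queue depth 128.
    ./spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &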
00:38:08.409 Latency(us) 00:38:08.409 [2024-10-28T04:13:59.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:08.409 Nvme0n1 : 1.00 13436.00 52.48 0.00 0.00 0.00 0.00 0.00 00:38:08.409 [2024-10-28T04:13:59.005Z] =================================================================================================================== 00:38:08.409 [2024-10-28T04:13:59.005Z] Total : 13436.00 52.48 0.00 0.00 0.00 0.00 0.00 00:38:08.409 00:38:09.343 05:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:09.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:09.601 Nvme0n1 : 2.00 13532.00 52.86 0.00 0.00 0.00 0.00 0.00 00:38:09.601 [2024-10-28T04:14:00.197Z] =================================================================================================================== 00:38:09.601 [2024-10-28T04:14:00.197Z] Total : 13532.00 52.86 0.00 0.00 0.00 0.00 0.00 00:38:09.601 00:38:09.601 true 00:38:09.601 05:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:09.601 05:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:10.166 05:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:10.166 05:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:10.166 05:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2506111 00:38:10.425 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:10.425 Nvme0n1 : 3.00 13585.67 53.07 0.00 0.00 0.00 0.00 0.00 00:38:10.425 [2024-10-28T04:14:01.021Z] =================================================================================================================== 00:38:10.425 [2024-10-28T04:14:01.021Z] Total : 13585.67 53.07 0.00 0.00 0.00 0.00 0.00 00:38:10.425 00:38:11.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:11.360 Nvme0n1 : 4.00 13676.75 53.42 0.00 0.00 0.00 0.00 0.00 00:38:11.360 [2024-10-28T04:14:01.956Z] =================================================================================================================== 00:38:11.360 [2024-10-28T04:14:01.956Z] Total : 13676.75 53.42 0.00 0.00 0.00 0.00 0.00 00:38:11.360 00:38:12.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:12.736 Nvme0n1 : 5.00 13719.00 53.59 0.00 0.00 0.00 0.00 0.00 00:38:12.736 [2024-10-28T04:14:03.332Z] =================================================================================================================== 00:38:12.736 [2024-10-28T04:14:03.332Z] Total : 13719.00 53.59 0.00 0.00 0.00 0.00 0.00 00:38:12.736 00:38:13.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:13.670 Nvme0n1 : 6.00 13778.17 53.82 0.00 0.00 0.00 0.00 0.00 00:38:13.670 [2024-10-28T04:14:04.266Z] 
=================================================================================================================== 00:38:13.670 [2024-10-28T04:14:04.266Z] Total : 13778.17 53.82 0.00 0.00 0.00 0.00 0.00 00:38:13.670 00:38:14.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.606 Nvme0n1 : 7.00 13811.14 53.95 0.00 0.00 0.00 0.00 0.00 00:38:14.606 [2024-10-28T04:14:05.202Z] =================================================================================================================== 00:38:14.606 [2024-10-28T04:14:05.202Z] Total : 13811.14 53.95 0.00 0.00 0.00 0.00 0.00 00:38:14.606 00:38:15.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:15.541 Nvme0n1 : 8.00 13979.00 54.61 0.00 0.00 0.00 0.00 0.00 00:38:15.541 [2024-10-28T04:14:06.137Z] =================================================================================================================== 00:38:15.541 [2024-10-28T04:14:06.137Z] Total : 13979.00 54.61 0.00 0.00 0.00 0.00 0.00 00:38:15.541 00:38:16.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:16.476 Nvme0n1 : 9.00 14060.22 54.92 0.00 0.00 0.00 0.00 0.00 00:38:16.476 [2024-10-28T04:14:07.072Z] =================================================================================================================== 00:38:16.476 [2024-10-28T04:14:07.072Z] Total : 14060.22 54.92 0.00 0.00 0.00 0.00 0.00 00:38:16.476 00:38:17.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:17.409 Nvme0n1 : 10.00 14061.00 54.93 0.00 0.00 0.00 0.00 0.00 00:38:17.409 [2024-10-28T04:14:08.005Z] =================================================================================================================== 00:38:17.409 [2024-10-28T04:14:08.005Z] Total : 14061.00 54.93 0.00 0.00 0.00 0.00 0.00 00:38:17.409 00:38:17.409 00:38:17.409 Latency(us) 00:38:17.409 [2024-10-28T04:14:08.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:17.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:17.409 Nvme0n1 : 10.01 14060.34 54.92 0.00 0.00 9097.49 5109.55 19756.91 00:38:17.409 [2024-10-28T04:14:08.005Z] =================================================================================================================== 00:38:17.409 [2024-10-28T04:14:08.005Z] Total : 14060.34 54.92 0.00 0.00 9097.49 5109.55 19756.91 00:38:17.409 { 00:38:17.409 "results": [ 00:38:17.409 { 00:38:17.409 "job": "Nvme0n1", 00:38:17.409 "core_mask": "0x2", 00:38:17.409 "workload": "randwrite", 00:38:17.409 "status": "finished", 00:38:17.409 "queue_depth": 128, 00:38:17.409 "io_size": 4096, 00:38:17.409 "runtime": 10.009576, 00:38:17.409 "iops": 14060.335822416455, 00:38:17.409 "mibps": 54.92318680631428, 00:38:17.409 "io_failed": 0, 00:38:17.409 "io_timeout": 0, 00:38:17.409 "avg_latency_us": 9097.4910513977, 00:38:17.409 "min_latency_us": 5109.545201410804, 00:38:17.409 "max_latency_us": 19756.908112121775 00:38:17.409 } 00:38:17.409 ], 00:38:17.409 "core_count": 1 00:38:17.409 } 00:38:17.409 05:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2505903 00:38:17.409 05:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2505903 ']' 00:38:17.409 05:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2505903 
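The step being exercised while that workload runs is the online grow of the lvstore. Stripped of the xtrace noise, the resize sequence used by both variants looks roughly like this ($lvs stands for the lvstore UUID, 83a691bc-... in this run; the file path is the harness default):

    rpc=./spdk/scripts/rpc.py

    # Enlarge the backing file from 200M to 400M and let the AIO bdev pick up the new size.
    truncate -s 400M ./spdk/test/nvmf/target/aio_bdev
    $rpc bdev_aio_rescan aio_bdev

    # Grow the lvstore into the new space while I/O keeps running, then verify the cluster count.
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before, 99 after with 4 MiB clusters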
00:38:17.409 05:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:38:17.409 05:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:17.409 05:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2505903 00:38:17.667 05:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:17.667 05:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:17.667 05:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2505903' 00:38:17.667 killing process with pid 2505903 00:38:17.667 05:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2505903 00:38:17.667 Received shutdown signal, test time was about 10.000000 seconds 00:38:17.667 00:38:17.667 Latency(us) 00:38:17.667 [2024-10-28T04:14:08.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:17.667 [2024-10-28T04:14:08.263Z] =================================================================================================================== 00:38:17.667 [2024-10-28T04:14:08.263Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:17.667 05:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2505903 00:38:17.667 05:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:17.924 05:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:18.182 05:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:18.182 05:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:18.749 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:18.749 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:18.749 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2503356 00:38:18.749 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2503356 00:38:18.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2503356 Killed "${NVMF_APP[@]}" "$@" 00:38:18.749 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:18.749 05:14:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:18.749 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:18.749 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:18.749 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:18.749 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2507313 00:38:18.749 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:18.749 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2507313 00:38:18.749 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2507313 ']' 00:38:18.750 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.750 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:18.750 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:18.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.750 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:18.750 05:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:18.750 [2024-10-28 05:14:09.132260] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:18.750 [2024-10-28 05:14:09.133392] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:38:18.750 [2024-10-28 05:14:09.133452] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:18.750 [2024-10-28 05:14:09.273158] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:18.750 [2024-10-28 05:14:09.315119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.009 [2024-10-28 05:14:09.364480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:19.009 [2024-10-28 05:14:09.364543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:19.009 [2024-10-28 05:14:09.364559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:19.009 [2024-10-28 05:14:09.364573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
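What distinguishes the dirty variant is visible just above: the nvmf target that owns the lvstore is killed with SIGKILL, so the lvstore is never closed cleanly, and a fresh target is started, this run in interrupt mode on a single core. Roughly ($old_nvmfpid is a placeholder for pid 2503356 from the trace; the netns name matches the harness):

    # Kill the old target hard; the lvstore on aio_bdev is deliberately left dirty.
    kill -9 "$old_nvmfpid"

    # Restart the target inside the test network namespace: core mask 0x1, interrupt mode, all tracepoints.
    ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!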
00:38:19.009 [2024-10-28 05:14:09.364584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:19.009 [2024-10-28 05:14:09.365225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.009 [2024-10-28 05:14:09.464006] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:19.009 [2024-10-28 05:14:09.464403] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:19.574 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:19.574 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:19.574 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:19.574 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:19.574 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:19.574 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:19.574 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:20.140 [2024-10-28 05:14:10.439825] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:20.140 [2024-10-28 05:14:10.439961] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:20.140 [2024-10-28 05:14:10.440010] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:20.140 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:20.140 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e88ab9a8-67e5-486e-8639-73e10703851a 00:38:20.140 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e88ab9a8-67e5-486e-8639-73e10703851a 00:38:20.140 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:20.140 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:20.140 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:20.140 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:20.140 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:20.140 05:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e88ab9a8-67e5-486e-8639-73e10703851a -t 2000 00:38:20.706 [ 00:38:20.706 { 00:38:20.706 "name": "e88ab9a8-67e5-486e-8639-73e10703851a", 00:38:20.706 "aliases": [ 00:38:20.706 "lvs/lvol" 00:38:20.706 ], 00:38:20.706 "product_name": "Logical Volume", 00:38:20.706 "block_size": 4096, 00:38:20.706 "num_blocks": 38912, 00:38:20.706 "uuid": "e88ab9a8-67e5-486e-8639-73e10703851a", 00:38:20.706 "assigned_rate_limits": { 00:38:20.706 "rw_ios_per_sec": 0, 00:38:20.706 "rw_mbytes_per_sec": 0, 00:38:20.706 "r_mbytes_per_sec": 0, 00:38:20.706 "w_mbytes_per_sec": 0 00:38:20.706 }, 00:38:20.706 "claimed": false, 00:38:20.706 "zoned": false, 00:38:20.706 "supported_io_types": { 00:38:20.706 "read": true, 00:38:20.706 "write": true, 00:38:20.706 "unmap": true, 00:38:20.706 "flush": false, 00:38:20.706 "reset": true, 00:38:20.706 "nvme_admin": false, 00:38:20.706 "nvme_io": false, 00:38:20.706 "nvme_io_md": false, 00:38:20.706 "write_zeroes": true, 00:38:20.706 "zcopy": false, 00:38:20.706 "get_zone_info": false, 00:38:20.706 "zone_management": false, 00:38:20.706 "zone_append": false, 00:38:20.706 "compare": false, 00:38:20.706 "compare_and_write": false, 00:38:20.706 "abort": false, 00:38:20.706 "seek_hole": true, 00:38:20.706 "seek_data": true, 00:38:20.706 "copy": false, 00:38:20.706 "nvme_iov_md": false 00:38:20.706 }, 00:38:20.706 "driver_specific": { 00:38:20.706 "lvol": { 00:38:20.706 "lvol_store_uuid": "83a691bc-bd22-4e8d-9263-0793ec26a000", 00:38:20.706 "base_bdev": "aio_bdev", 00:38:20.706 "thin_provision": false, 00:38:20.706 "num_allocated_clusters": 38, 00:38:20.706 "snapshot": false, 00:38:20.706 "clone": false, 00:38:20.706 "esnap_clone": false 00:38:20.706 } 00:38:20.706 } 00:38:20.706 } 00:38:20.706 ] 00:38:20.706 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:20.706 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:20.706 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:20.706 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:20.706 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:20.706 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:21.272 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:21.272 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:21.272 [2024-10-28 05:14:11.837829] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:21.272 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:21.272 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:38:21.272 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:21.272 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.273 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:21.273 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.531 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:21.531 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.531 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:21.531 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.531 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:21.531 05:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:21.790 request: 00:38:21.790 { 00:38:21.790 "uuid": "83a691bc-bd22-4e8d-9263-0793ec26a000", 00:38:21.790 "method": "bdev_lvol_get_lvstores", 00:38:21.790 "req_id": 1 00:38:21.790 } 00:38:21.790 Got JSON-RPC error response 00:38:21.790 response: 00:38:21.790 { 00:38:21.790 "code": -19, 00:38:21.790 "message": "No such device" 00:38:21.790 } 00:38:21.790 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:38:21.790 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:21.790 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:21.790 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:21.790 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:22.049 
aio_bdev 00:38:22.049 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e88ab9a8-67e5-486e-8639-73e10703851a 00:38:22.049 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e88ab9a8-67e5-486e-8639-73e10703851a 00:38:22.049 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:22.049 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:22.049 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:22.049 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:22.049 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:22.308 05:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e88ab9a8-67e5-486e-8639-73e10703851a -t 2000 00:38:22.567 [ 00:38:22.567 { 00:38:22.567 "name": "e88ab9a8-67e5-486e-8639-73e10703851a", 00:38:22.567 "aliases": [ 00:38:22.567 "lvs/lvol" 00:38:22.567 ], 00:38:22.567 "product_name": "Logical Volume", 00:38:22.567 "block_size": 4096, 00:38:22.567 "num_blocks": 38912, 00:38:22.567 "uuid": "e88ab9a8-67e5-486e-8639-73e10703851a", 00:38:22.567 "assigned_rate_limits": { 00:38:22.567 "rw_ios_per_sec": 0, 00:38:22.567 "rw_mbytes_per_sec": 0, 00:38:22.567 "r_mbytes_per_sec": 0, 00:38:22.567 "w_mbytes_per_sec": 0 00:38:22.567 }, 00:38:22.567 "claimed": false, 00:38:22.567 "zoned": false, 00:38:22.567 "supported_io_types": { 00:38:22.567 "read": true, 00:38:22.567 "write": true, 00:38:22.567 "unmap": true, 00:38:22.567 "flush": false, 00:38:22.567 "reset": true, 00:38:22.567 "nvme_admin": false, 00:38:22.567 "nvme_io": false, 00:38:22.567 "nvme_io_md": false, 00:38:22.567 "write_zeroes": true, 00:38:22.567 "zcopy": false, 00:38:22.567 "get_zone_info": false, 00:38:22.567 "zone_management": false, 00:38:22.567 "zone_append": false, 00:38:22.567 "compare": false, 00:38:22.567 "compare_and_write": false, 00:38:22.567 "abort": false, 00:38:22.567 "seek_hole": true, 00:38:22.567 "seek_data": true, 00:38:22.567 "copy": false, 00:38:22.567 "nvme_iov_md": false 00:38:22.567 }, 00:38:22.567 "driver_specific": { 00:38:22.567 "lvol": { 00:38:22.567 "lvol_store_uuid": "83a691bc-bd22-4e8d-9263-0793ec26a000", 00:38:22.567 "base_bdev": "aio_bdev", 00:38:22.567 "thin_provision": false, 00:38:22.567 "num_allocated_clusters": 38, 00:38:22.567 "snapshot": false, 00:38:22.567 "clone": false, 00:38:22.568 "esnap_clone": false 00:38:22.568 } 00:38:22.568 } 00:38:22.568 } 00:38:22.568 ] 00:38:22.568 05:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:22.568 05:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:22.568 05:14:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:22.826 05:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:22.826 05:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:22.826 05:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:23.084 05:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:23.084 05:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e88ab9a8-67e5-486e-8639-73e10703851a 00:38:23.343 05:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 83a691bc-bd22-4e8d-9263-0793ec26a000 00:38:23.601 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:24.168 00:38:24.168 real 0m20.754s 00:38:24.168 user 0m37.267s 00:38:24.168 sys 0m4.678s 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:24.168 ************************************ 00:38:24.168 END TEST lvs_grow_dirty 00:38:24.168 ************************************ 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:24.168 nvmf_trace.0 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:24.168 rmmod nvme_tcp 00:38:24.168 rmmod nvme_fabrics 00:38:24.168 rmmod nvme_keyring 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2507313 ']' 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2507313 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2507313 ']' 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2507313 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2507313 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2507313' 00:38:24.168 killing process with pid 2507313 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2507313 00:38:24.168 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2507313 00:38:24.426 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:24.426 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:24.426 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 
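For reference, the trace collection and cleanup that process_shm and nvmfcleanup perform above reduce to a handful of plain shell commands; a minimal sketch using the paths and PID recorded in this run (not a general-purpose script):
# Archive the SPDK trace buffer left in shared memory so it can be inspected offline
tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
# Flush outstanding I/O, unload the kernel NVMe/TCP initiator stack, then stop the target app
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 2507313    # nvmfpid for this run; the helper follows up with 'wait 2507313'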
00:38:24.426 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:24.426 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:38:24.426 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:24.426 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:38:24.426 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:24.426 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:24.426 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:24.426 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:24.426 05:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:26.329 05:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:26.329 00:38:26.329 real 0m45.088s 00:38:26.329 user 0m56.826s 00:38:26.329 sys 0m8.603s 00:38:26.329 05:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:26.329 05:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:26.329 ************************************ 00:38:26.329 END TEST nvmf_lvs_grow 00:38:26.329 ************************************ 00:38:26.329 05:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:26.329 05:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:26.329 05:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:26.329 05:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:26.589 ************************************ 00:38:26.589 START TEST nvmf_bdev_io_wait 00:38:26.589 ************************************ 00:38:26.589 05:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:26.589 * Looking for test storage... 
00:38:26.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:26.589 05:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:38:26.589 05:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lcov --version 00:38:26.589 05:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:38:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.589 --rc genhtml_branch_coverage=1 00:38:26.589 --rc genhtml_function_coverage=1 00:38:26.589 --rc genhtml_legend=1 00:38:26.589 --rc geninfo_all_blocks=1 00:38:26.589 --rc geninfo_unexecuted_blocks=1 00:38:26.589 00:38:26.589 ' 00:38:26.589 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:38:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.589 --rc genhtml_branch_coverage=1 00:38:26.589 --rc genhtml_function_coverage=1 00:38:26.590 --rc genhtml_legend=1 00:38:26.590 --rc geninfo_all_blocks=1 00:38:26.590 --rc geninfo_unexecuted_blocks=1 00:38:26.590 00:38:26.590 ' 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:38:26.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.590 --rc genhtml_branch_coverage=1 00:38:26.590 --rc genhtml_function_coverage=1 00:38:26.590 --rc genhtml_legend=1 00:38:26.590 --rc geninfo_all_blocks=1 00:38:26.590 --rc geninfo_unexecuted_blocks=1 00:38:26.590 00:38:26.590 ' 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:38:26.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.590 --rc genhtml_branch_coverage=1 00:38:26.590 --rc genhtml_function_coverage=1 00:38:26.590 --rc genhtml_legend=1 00:38:26.590 --rc geninfo_all_blocks=1 00:38:26.590 --rc 
geninfo_unexecuted_blocks=1 00:38:26.590 00:38:26.590 ' 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:26.590 05:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
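The NVMF_APP array assembled here (shared-memory id, 0xFFFF tracepoint mask, --interrupt-mode because the suite was started with that flag) is what eventually launches the target. For orientation, the invocation that results later in this log looks roughly like the following, run from the SPDK repo root inside the target namespace that nvmf_tcp_init creates further down:
# nvmf_tgt as started below: shm id 0, all tracepoint groups, interrupt mode, core mask 0xF,
# and --wait-for-rpc so bdev options can be set before the framework initializes
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc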
00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
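The arrays built above are just PCI vendor/device ID tables (Intel E810/X722 plus several Mellanox parts) that the script matches against the host. The two E810 ports this run ends up using can also be located by hand; a hedged example, assuming lspci is installed on the host:
# Intel E810 parts carry device id 0x159b; this host exposes them at 0000:0a:00.0 and 0000:0a:00.1
lspci -d 8086:159b
# Kernel netdev name behind a PCI function, the same sysfs path the script walks to build net_devs
ls /sys/bus/pci/devices/0000:0a:00.0/net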
00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:28.497 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:28.497 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:28.498 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:28.498 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:28.498 
05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:28.498 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:28.498 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:28.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:28.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:38:28.757 00:38:28.757 --- 10.0.0.2 ping statistics --- 00:38:28.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.757 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:28.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:28.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:38:28.757 00:38:28.757 --- 10.0.0.1 ping statistics --- 00:38:28.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.757 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:28.757 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:28.758 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:28.758 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2509933 00:38:28.758 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:28.758 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2509933 00:38:28.758 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2509933 ']' 00:38:28.758 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:28.758 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:28.758 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:28.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
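The nvmf_tcp_init sequence above wires the two E810 ports back to back, with the target-side port isolated in its own network namespace so that both ends of the link can live on one host. Condensed into plain ip/iptables commands, with the interface and namespace names used in this run, it is roughly:
# Target port moves into a dedicated namespace; the initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic to port 4420 through on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions, exactly as the script does
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1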
00:38:28.758 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:28.758 05:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:28.758 [2024-10-28 05:14:19.221787] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:28.758 [2024-10-28 05:14:19.222868] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:38:28.758 [2024-10-28 05:14:19.222938] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:29.017 [2024-10-28 05:14:19.362177] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:29.017 [2024-10-28 05:14:19.398801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:29.017 [2024-10-28 05:14:19.448746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:29.017 [2024-10-28 05:14:19.448798] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:29.017 [2024-10-28 05:14:19.448828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:29.017 [2024-10-28 05:14:19.448840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:29.017 [2024-10-28 05:14:19.448850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:29.017 [2024-10-28 05:14:19.450429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:29.017 [2024-10-28 05:14:19.450493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:29.017 [2024-10-28 05:14:19.450561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:29.017 [2024-10-28 05:14:19.450564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.017 [2024-10-28 05:14:19.451242] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
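Because nvmf_tgt was launched with --wait-for-rpc, the waitforlisten step above only needs the RPC socket to answer; the framework itself is not initialized until the framework_start_init call that follows. A rough stand-in for that polling loop, using the rpc.py client bundled with SPDK (the timeout and sleep values here are illustrative, not the helper's exact ones):
# Poll the default /var/tmp/spdk.sock until the freshly started nvmf_tgt responds to RPCs
until ./scripts/rpc.py -t 5 rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done
echo 'nvmf_tgt is listening on /var/tmp/spdk.sock'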
00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:29.951 [2024-10-28 05:14:20.340443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:29.951 [2024-10-28 05:14:20.340660] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:29.951 [2024-10-28 05:14:20.341593] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:29.951 [2024-10-28 05:14:20.342517] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
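The rpc_cmd calls around this point configure the target end to end: bdev options first (they must precede framework init), then the framework itself, the TCP transport, a 64 MiB malloc namespace, and a subsystem listening on 10.0.0.2:4420. Collected into one place as plain rpc.py calls, the sequence from this run is approximately:
./scripts/rpc.py bdev_set_options -p 5 -c 1
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420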
00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:29.951 [2024-10-28 05:14:20.347475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:29.951 Malloc0 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:29.951 [2024-10-28 05:14:20.403604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:29.951 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2510085 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2510087 00:38:29.952 05:14:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:29.952 { 00:38:29.952 "params": { 00:38:29.952 "name": "Nvme$subsystem", 00:38:29.952 "trtype": "$TEST_TRANSPORT", 00:38:29.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:29.952 "adrfam": "ipv4", 00:38:29.952 "trsvcid": "$NVMF_PORT", 00:38:29.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:29.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:29.952 "hdgst": ${hdgst:-false}, 00:38:29.952 "ddgst": ${ddgst:-false} 00:38:29.952 }, 00:38:29.952 "method": "bdev_nvme_attach_controller" 00:38:29.952 } 00:38:29.952 EOF 00:38:29.952 )") 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2510089 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:29.952 { 00:38:29.952 "params": { 00:38:29.952 "name": "Nvme$subsystem", 00:38:29.952 "trtype": "$TEST_TRANSPORT", 00:38:29.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:29.952 "adrfam": "ipv4", 00:38:29.952 "trsvcid": "$NVMF_PORT", 00:38:29.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:29.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:29.952 "hdgst": ${hdgst:-false}, 00:38:29.952 "ddgst": ${ddgst:-false} 00:38:29.952 }, 00:38:29.952 "method": "bdev_nvme_attach_controller" 00:38:29.952 } 00:38:29.952 EOF 00:38:29.952 )") 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # 
UNMAP_PID=2510092 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:29.952 { 00:38:29.952 "params": { 00:38:29.952 "name": "Nvme$subsystem", 00:38:29.952 "trtype": "$TEST_TRANSPORT", 00:38:29.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:29.952 "adrfam": "ipv4", 00:38:29.952 "trsvcid": "$NVMF_PORT", 00:38:29.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:29.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:29.952 "hdgst": ${hdgst:-false}, 00:38:29.952 "ddgst": ${ddgst:-false} 00:38:29.952 }, 00:38:29.952 "method": "bdev_nvme_attach_controller" 00:38:29.952 } 00:38:29.952 EOF 00:38:29.952 )") 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:29.952 { 00:38:29.952 "params": { 00:38:29.952 "name": "Nvme$subsystem", 00:38:29.952 "trtype": "$TEST_TRANSPORT", 00:38:29.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:29.952 "adrfam": "ipv4", 00:38:29.952 "trsvcid": "$NVMF_PORT", 00:38:29.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:29.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:29.952 "hdgst": ${hdgst:-false}, 00:38:29.952 "ddgst": ${ddgst:-false} 00:38:29.952 }, 00:38:29.952 "method": "bdev_nvme_attach_controller" 00:38:29.952 } 00:38:29.952 EOF 00:38:29.952 )") 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2510085 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
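Each of the four workloads above is a separate bdevperf process pinned to its own core, and all four read the same generated NVMe/TCP attach configuration from /dev/fd/63. The write instance, with path and arguments taken from this run, is launched like this; the read, flush and unmap instances differ only in -m, -i and -w (0x20/2/read, 0x40/3/flush, 0x80/4/unmap):
# Write workload: core mask 0x10, shm id 1, queue depth 128, 4 KiB I/O, 1 second run, 256 MB of DPDK memory
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256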
00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:29.952 "params": { 00:38:29.952 "name": "Nvme1", 00:38:29.952 "trtype": "tcp", 00:38:29.952 "traddr": "10.0.0.2", 00:38:29.952 "adrfam": "ipv4", 00:38:29.952 "trsvcid": "4420", 00:38:29.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:29.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:29.952 "hdgst": false, 00:38:29.952 "ddgst": false 00:38:29.952 }, 00:38:29.952 "method": "bdev_nvme_attach_controller" 00:38:29.952 }' 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:29.952 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:29.952 "params": { 00:38:29.952 "name": "Nvme1", 00:38:29.952 "trtype": "tcp", 00:38:29.952 "traddr": "10.0.0.2", 00:38:29.952 "adrfam": "ipv4", 00:38:29.952 "trsvcid": "4420", 00:38:29.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:29.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:29.952 "hdgst": false, 00:38:29.952 "ddgst": false 00:38:29.952 }, 00:38:29.953 "method": "bdev_nvme_attach_controller" 00:38:29.953 }' 00:38:29.953 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:29.953 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:29.953 "params": { 00:38:29.953 "name": "Nvme1", 00:38:29.953 "trtype": "tcp", 00:38:29.953 "traddr": "10.0.0.2", 00:38:29.953 "adrfam": "ipv4", 00:38:29.953 "trsvcid": "4420", 00:38:29.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:29.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:29.953 "hdgst": false, 00:38:29.953 "ddgst": false 00:38:29.953 }, 00:38:29.953 "method": "bdev_nvme_attach_controller" 00:38:29.953 }' 00:38:29.953 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:29.953 05:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:29.953 "params": { 00:38:29.953 "name": "Nvme1", 00:38:29.953 "trtype": "tcp", 00:38:29.953 "traddr": "10.0.0.2", 00:38:29.953 "adrfam": "ipv4", 00:38:29.953 "trsvcid": "4420", 00:38:29.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:29.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:29.953 "hdgst": false, 00:38:29.953 "ddgst": false 00:38:29.953 }, 00:38:29.953 "method": "bdev_nvme_attach_controller" 00:38:29.953 }' 00:38:29.953 [2024-10-28 05:14:20.454895] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:38:29.953 [2024-10-28 05:14:20.454895] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:38:29.953 [2024-10-28 05:14:20.454923] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:38:29.953 [2024-10-28 05:14:20.454922] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:38:29.953 [2024-10-28 05:14:20.455001] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-28 05:14:20.455001] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:29.953 --proc-type=auto ] 00:38:29.953 [2024-10-28 05:14:20.455017] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-28 05:14:20.455017] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:29.953 --proc-type=auto ] 00:38:30.211 [2024-10-28 05:14:20.713217] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:30.211 [2024-10-28 05:14:20.751588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.211 [2024-10-28 05:14:20.799183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:30.470 [2024-10-28 05:14:20.813404] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:30.470 [2024-10-28 05:14:20.852238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.470 [2024-10-28 05:14:20.887740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:30.470 [2024-10-28 05:14:20.896763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:30.470 [2024-10-28 05:14:20.926367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.470 [2024-10-28 05:14:20.963870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:30.470 [2024-10-28 05:14:20.967049] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:30.470 Running I/O for 1 seconds... 00:38:30.470 [2024-10-28 05:14:21.006043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.470 [2024-10-28 05:14:21.042054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:30.729 Running I/O for 1 seconds... 00:38:30.729 Running I/O for 1 seconds... 00:38:30.987 Running I/O for 1 seconds... 
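The "Reactor started on core N" notices above line up with the core masks handed to each instance: a single-bit mask 0x10/0x20/0x40/0x80 pins the lone reactor to core 4/5/6/7, and the target's -m 0x2 later in this log pins its reactor to core 1. A quick bit-position check:

for mask in 0x2 0x10 0x20 0x40 0x80; do
    v=$((mask)) core=0
    while (( (v & 1) == 0 )); do v=$((v >> 1)); core=$((core + 1)); done
    echo "mask $mask -> core $core"
done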
00:38:31.554 7515.00 IOPS, 29.36 MiB/s 00:38:31.554 Latency(us) 00:38:31.554 [2024-10-28T04:14:22.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:31.554 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:31.554 Nvme1n1 : 1.02 7490.82 29.26 0.00 0.00 16947.70 4403.94 33674.34 00:38:31.554 [2024-10-28T04:14:22.150Z] =================================================================================================================== 00:38:31.554 [2024-10-28T04:14:22.150Z] Total : 7490.82 29.26 0.00 0.00 16947.70 4403.94 33674.34 00:38:31.554 196736.00 IOPS, 768.50 MiB/s 00:38:31.554 Latency(us) 00:38:31.554 [2024-10-28T04:14:22.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:31.554 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:31.554 Nvme1n1 : 1.00 196364.04 767.05 0.00 0.00 648.36 293.49 1873.50 00:38:31.554 [2024-10-28T04:14:22.150Z] =================================================================================================================== 00:38:31.554 [2024-10-28T04:14:22.150Z] Total : 196364.04 767.05 0.00 0.00 648.36 293.49 1873.50 00:38:31.813 8688.00 IOPS, 33.94 MiB/s 00:38:31.813 Latency(us) 00:38:31.813 [2024-10-28T04:14:22.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:31.813 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:31.813 Nvme1n1 : 1.01 8741.37 34.15 0.00 0.00 14571.50 2408.79 21022.13 00:38:31.813 [2024-10-28T04:14:22.409Z] =================================================================================================================== 00:38:31.813 [2024-10-28T04:14:22.409Z] Total : 8741.37 34.15 0.00 0.00 14571.50 2408.79 21022.13 00:38:31.813 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2510087 00:38:31.813 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2510089 00:38:31.813 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2510092 00:38:31.813 7815.00 IOPS, 30.53 MiB/s 00:38:31.813 Latency(us) 00:38:31.813 [2024-10-28T04:14:22.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:31.813 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:31.813 Nvme1n1 : 1.01 7916.00 30.92 0.00 0.00 16124.16 3649.68 41265.66 00:38:31.813 [2024-10-28T04:14:22.409Z] =================================================================================================================== 00:38:31.813 [2024-10-28T04:14:22.409Z] Total : 7916.00 30.92 0.00 0.00 16124.16 3649.68 41265.66 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:32.071 rmmod nvme_tcp 00:38:32.071 rmmod nvme_fabrics 00:38:32.071 rmmod nvme_keyring 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2509933 ']' 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2509933 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2509933 ']' 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2509933 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2509933 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2509933' 00:38:32.071 killing process with pid 2509933 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2509933 00:38:32.071 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2509933 00:38:32.330 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:32.330 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:32.330 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:32.330 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:32.330 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
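A small sanity check on the Latency(us) tables above: the MiB/s column is simply IOPS times the 4096-byte IO size, scaled to MiB. For the read job, for example:

awk 'BEGIN { printf "%.2f MiB/s\n", 7490.82 * 4096 / (1024 * 1024) }'   # ~29.26, as reported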
00:38:32.330 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:32.330 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:38:32.330 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:32.330 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:32.330 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:32.330 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:32.330 05:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:34.234 05:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:34.234 00:38:34.234 real 0m7.883s 00:38:34.234 user 0m14.723s 00:38:34.234 sys 0m3.825s 00:38:34.234 05:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:34.234 05:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.494 ************************************ 00:38:34.494 END TEST nvmf_bdev_io_wait 00:38:34.494 ************************************ 00:38:34.494 05:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:34.494 05:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:34.494 05:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:34.494 05:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:34.494 ************************************ 00:38:34.494 START TEST nvmf_queue_depth 00:38:34.494 ************************************ 00:38:34.494 05:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:34.494 * Looking for test storage... 
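For reference, the nvmftestfini teardown traced at the end of the bdev_io_wait run above (and repeated at the end of the queue-depth run below) condenses to roughly the following. This is a hedged summary of the traced commands, not the nvmf/common.sh functions themselves; $nvmfpid stands for the target pid (2509933 above).

sync
modprobe -v -r nvme-tcp       # the rmmod nvme_tcp line above
modprobe -v -r nvme-fabrics   # nvme_fabrics and nvme_keyring drop out as their users go away
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null          # killprocess 2509933
iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep every rule except the SPDK-tagged ones
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # remove_spdk_ns
ip -4 addr flush cvl_0_1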
00:38:34.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:34.494 05:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:38:34.494 05:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lcov --version 00:38:34.494 05:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:38:34.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.494 --rc genhtml_branch_coverage=1 00:38:34.494 --rc genhtml_function_coverage=1 00:38:34.494 --rc genhtml_legend=1 00:38:34.494 --rc geninfo_all_blocks=1 00:38:34.494 --rc geninfo_unexecuted_blocks=1 00:38:34.494 00:38:34.494 ' 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:38:34.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.494 --rc genhtml_branch_coverage=1 00:38:34.494 --rc genhtml_function_coverage=1 00:38:34.494 --rc genhtml_legend=1 00:38:34.494 --rc geninfo_all_blocks=1 00:38:34.494 --rc geninfo_unexecuted_blocks=1 00:38:34.494 00:38:34.494 ' 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:38:34.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.494 --rc genhtml_branch_coverage=1 00:38:34.494 --rc genhtml_function_coverage=1 00:38:34.494 --rc genhtml_legend=1 00:38:34.494 --rc geninfo_all_blocks=1 00:38:34.494 --rc geninfo_unexecuted_blocks=1 00:38:34.494 00:38:34.494 ' 00:38:34.494 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:38:34.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.495 --rc genhtml_branch_coverage=1 00:38:34.495 --rc genhtml_function_coverage=1 00:38:34.495 --rc genhtml_legend=1 00:38:34.495 --rc geninfo_all_blocks=1 00:38:34.495 --rc 
geninfo_unexecuted_blocks=1 00:38:34.495 00:38:34.495 ' 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:34.495 05:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:36.401 05:14:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:36.401 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:36.401 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:36.401 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:38:36.402 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:36.402 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:36.402 05:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:36.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:36.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:38:36.661 00:38:36.661 --- 10.0.0.2 ping statistics --- 00:38:36.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.661 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:36.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:36.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:38:36.661 00:38:36.661 --- 10.0.0.1 ping statistics --- 00:38:36.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.661 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2512292 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2512292 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2512292 ']' 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:36.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
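The nvmf_tcp_init sequence traced above amounts to moving one port of the detected E810 pair (cvl_0_0) into a private namespace for the target while the other (cvl_0_1) stays on the initiator side, opening the NVMe/TCP port, and checking reachability in both directions. A condensed replay of the traced commands, with the interface names and addresses this run detected:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                      # root netns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target netns -> initiator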
00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:36.661 05:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:36.661 [2024-10-28 05:14:27.175388] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:36.661 [2024-10-28 05:14:27.176536] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:38:36.661 [2024-10-28 05:14:27.176614] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:36.921 [2024-10-28 05:14:27.321067] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:36.921 [2024-10-28 05:14:27.363563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.921 [2024-10-28 05:14:27.412791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:36.921 [2024-10-28 05:14:27.412858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:36.921 [2024-10-28 05:14:27.412875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:36.921 [2024-10-28 05:14:27.412888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:36.921 [2024-10-28 05:14:27.412900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:36.921 [2024-10-28 05:14:27.413554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:36.921 [2024-10-28 05:14:27.508121] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:36.921 [2024-10-28 05:14:27.508476] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
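The nvmfappstart step above launches the target inside that namespace on a single core in interrupt mode and then blocks until its RPC socket answers. A hedged equivalent, with waitforlisten approximated by polling rpc_get_methods and the paths taken from this workspace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2   # stand-in for "waitforlisten $nvmfpid"
done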
00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:37.854 [2024-10-28 05:14:28.214197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:37.854 Malloc0 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:37.854 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
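The rpc_cmd calls traced above (queue_depth.sh@23 through @27) build the whole target configuration for this test. Expressed directly against scripts/rpc.py, talking to the target's default /var/tmp/spdk.sock and reusing $SPDK from the sketch above, they would read:

rpc="$SPDK/scripts/rpc.py"
"$rpc" nvmf_create_transport -t tcp -o -u 8192                  # TCP transport; -u 8192 caps in-capsule data at 8 KiB
"$rpc" bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420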
00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:37.855 [2024-10-28 05:14:28.278325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2512441 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2512441 /var/tmp/bdevperf.sock 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2512441 ']' 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:37.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:37.855 05:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:37.855 [2024-10-28 05:14:28.329431] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:38:37.855 [2024-10-28 05:14:28.329508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512441 ] 00:38:38.113 [2024-10-28 05:14:28.462084] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
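The queue-depth probe itself runs bdevperf in wait-for-RPC mode (-z) against its own socket; once the NVMe/TCP controller is attached, bdevperf.py kicks off the 10-second verify workload at queue depth 1024 (the attach and perform_tests calls appear in the trace that follows). A hedged outline of that flow, reusing $SPDK from the earlier sketch:

"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests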
00:38:38.113 [2024-10-28 05:14:28.499004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.113 [2024-10-28 05:14:28.547366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.048 05:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:39.048 05:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:39.048 05:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:39.048 05:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:39.048 05:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:39.048 NVMe0n1 00:38:39.048 05:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:39.048 05:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:39.306 Running I/O for 10 seconds... 00:38:41.238 7941.00 IOPS, 31.02 MiB/s [2024-10-28T04:14:32.770Z] 8054.00 IOPS, 31.46 MiB/s [2024-10-28T04:14:33.711Z] 8118.67 IOPS, 31.71 MiB/s [2024-10-28T04:14:35.088Z] 8135.75 IOPS, 31.78 MiB/s [2024-10-28T04:14:36.023Z] 8171.00 IOPS, 31.92 MiB/s [2024-10-28T04:14:36.958Z] 8187.83 IOPS, 31.98 MiB/s [2024-10-28T04:14:37.894Z] 8192.57 IOPS, 32.00 MiB/s [2024-10-28T04:14:38.829Z] 8194.25 IOPS, 32.01 MiB/s [2024-10-28T04:14:39.765Z] 8194.44 IOPS, 32.01 MiB/s [2024-10-28T04:14:40.024Z] 8197.40 IOPS, 32.02 MiB/s 00:38:49.428 Latency(us) 00:38:49.428 [2024-10-28T04:14:40.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:49.428 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:49.428 Verification LBA range: start 0x0 length 0x4000 00:38:49.428 NVMe0n1 : 10.13 8190.70 31.99 0.00 0.00 123987.47 25304.41 73188.15 00:38:49.428 [2024-10-28T04:14:40.024Z] =================================================================================================================== 00:38:49.428 [2024-10-28T04:14:40.024Z] Total : 8190.70 31.99 0.00 0.00 123987.47 25304.41 73188.15 00:38:49.428 { 00:38:49.428 "results": [ 00:38:49.428 { 00:38:49.428 "job": "NVMe0n1", 00:38:49.428 "core_mask": "0x1", 00:38:49.428 "workload": "verify", 00:38:49.428 "status": "finished", 00:38:49.428 "verify_range": { 00:38:49.428 "start": 0, 00:38:49.428 "length": 16384 00:38:49.428 }, 00:38:49.428 "queue_depth": 1024, 00:38:49.428 "io_size": 4096, 00:38:49.428 "runtime": 10.132351, 00:38:49.428 "iops": 8190.695328260934, 00:38:49.428 "mibps": 31.994903626019273, 00:38:49.428 "io_failed": 0, 00:38:49.428 "io_timeout": 0, 00:38:49.428 "avg_latency_us": 123987.47412406922, 00:38:49.428 "min_latency_us": 25304.41433079636, 00:38:49.428 "max_latency_us": 73188.15221830332 00:38:49.428 } 00:38:49.428 ], 00:38:49.428 "core_count": 1 00:38:49.428 } 00:38:49.428 05:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2512441 00:38:49.428 05:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # 
'[' -z 2512441 ']' 00:38:49.428 05:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2512441 00:38:49.428 05:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:49.428 05:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:49.428 05:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2512441 00:38:49.428 05:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:49.428 05:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:49.428 05:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2512441' 00:38:49.428 killing process with pid 2512441 00:38:49.428 05:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2512441 00:38:49.428 Received shutdown signal, test time was about 10.000000 seconds 00:38:49.428 00:38:49.428 Latency(us) 00:38:49.428 [2024-10-28T04:14:40.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:49.428 [2024-10-28T04:14:40.024Z] =================================================================================================================== 00:38:49.428 [2024-10-28T04:14:40.024Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:49.428 05:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2512441 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:49.686 rmmod nvme_tcp 00:38:49.686 rmmod nvme_fabrics 00:38:49.686 rmmod nvme_keyring 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2512292 ']' 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2512292 00:38:49.686 05:14:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2512292 ']' 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2512292 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2512292 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2512292' 00:38:49.686 killing process with pid 2512292 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2512292 00:38:49.686 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2512292 00:38:49.946 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:49.946 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:49.946 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:49.946 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:49.946 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:38:49.946 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:49.946 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:38:49.946 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:49.946 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:49.946 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:49.946 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:49.946 05:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:52.485 00:38:52.485 real 0m17.591s 00:38:52.485 user 0m24.078s 00:38:52.485 sys 0m3.522s 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:52.485 ************************************ 00:38:52.485 END TEST 
nvmf_queue_depth 00:38:52.485 ************************************ 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:52.485 ************************************ 00:38:52.485 START TEST nvmf_target_multipath 00:38:52.485 ************************************ 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:52.485 * Looking for test storage... 00:38:52.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lcov --version 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:52.485 05:14:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:52.485 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:38:52.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.485 --rc genhtml_branch_coverage=1 00:38:52.485 --rc genhtml_function_coverage=1 00:38:52.485 --rc genhtml_legend=1 00:38:52.485 --rc geninfo_all_blocks=1 00:38:52.485 --rc geninfo_unexecuted_blocks=1 00:38:52.485 00:38:52.486 ' 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:38:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.486 --rc genhtml_branch_coverage=1 00:38:52.486 --rc genhtml_function_coverage=1 00:38:52.486 --rc genhtml_legend=1 00:38:52.486 --rc geninfo_all_blocks=1 00:38:52.486 --rc geninfo_unexecuted_blocks=1 00:38:52.486 00:38:52.486 ' 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:38:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.486 --rc genhtml_branch_coverage=1 00:38:52.486 --rc genhtml_function_coverage=1 00:38:52.486 --rc genhtml_legend=1 00:38:52.486 --rc geninfo_all_blocks=1 00:38:52.486 --rc geninfo_unexecuted_blocks=1 00:38:52.486 00:38:52.486 ' 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:38:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:52.486 --rc genhtml_branch_coverage=1 00:38:52.486 --rc genhtml_function_coverage=1 00:38:52.486 --rc genhtml_legend=1 00:38:52.486 --rc geninfo_all_blocks=1 00:38:52.486 --rc geninfo_unexecuted_blocks=1 00:38:52.486 00:38:52.486 ' 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:52.486 05:14:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.486 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:52.487 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:52.487 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:52.487 05:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@315 -- # pci_devs=() 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
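The nvmftestinit/nvmf_tcp_init trace that follows is easier to read as a sketch: the Intel/Mellanox device-ID arrays being filled in above are used to locate the two e810 ports, their kernel net devices are looked up under sysfs, and one port is moved into a private network namespace to act as the target side, with a ping check in each direction at the end. Roughly, using only commands and names that appear in the surrounding trace (the loop is a simplification of nvmf/common.sh, not its exact code):

    # map each supported PCI function to its kernel net device (cvl_0_0 and cvl_0_1 here)
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")
    done

    # target port lives in its own namespace; the initiator port stays in the default one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP listener port and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then launched through NVMF_TARGET_NS_CMD inside cvl_0_0_ns_spdk and listens on 10.0.0.2:4420, while initiators such as bdevperf connect from the host side over cvl_0_1 (10.0.0.1).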
00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:54.390 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:54.390 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:54.390 05:14:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:54.390 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:54.390 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:54.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:54.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:38:54.390 00:38:54.390 --- 10.0.0.2 ping statistics --- 00:38:54.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.390 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:54.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:54.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:38:54.390 00:38:54.390 --- 10.0.0.1 ping statistics --- 00:38:54.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.390 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:54.390 only one NIC for nvmf test 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:54.390 rmmod nvme_tcp 00:38:54.390 rmmod nvme_fabrics 00:38:54.390 rmmod nvme_keyring 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p 
]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:54.390 05:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:56.924 05:14:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:56.924 05:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:56.924 00:38:56.924 real 0m4.490s 00:38:56.924 user 0m0.951s 00:38:56.924 sys 0m1.545s 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:56.924 ************************************ 00:38:56.924 END TEST nvmf_target_multipath 00:38:56.924 ************************************ 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:56.924 ************************************ 00:38:56.924 START TEST nvmf_zcopy 00:38:56.924 ************************************ 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:56.924 * Looking for test storage... 
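The multipath suite above bailed out almost immediately (multipath.sh prints 'only one NIC for nvmf test' and exits 0), which is why its END TEST banner arrives after only about 4.5 s of wall time; the real/user/sys triplet above is the timing of that run. Every suite in this log, including the nvmf_zcopy run starting here, goes through the same run_test wrapper: a START banner, a timed invocation of the test script, then the timing and an END banner. A rough reconstruction of that pattern as inferred from the banners (not the literal common/autotest_common.sh implementation) is:

    run_test() {
        local suite=$1; shift
        echo "************************************"
        echo "START TEST $suite"
        echo "************************************"
        # run the suite and report real/user/sys, as seen before each END TEST banner
        time "$@"
        echo "************************************"
        echo "END TEST $suite"
        echo "************************************"
    }

    # invocation matching the trace:
    # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode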
00:38:56.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lcov --version 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:38:56.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.924 --rc genhtml_branch_coverage=1 00:38:56.924 --rc genhtml_function_coverage=1 00:38:56.924 --rc genhtml_legend=1 00:38:56.924 --rc geninfo_all_blocks=1 00:38:56.924 --rc geninfo_unexecuted_blocks=1 00:38:56.924 00:38:56.924 ' 00:38:56.924 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:38:56.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.924 --rc genhtml_branch_coverage=1 00:38:56.924 --rc genhtml_function_coverage=1 00:38:56.924 --rc genhtml_legend=1 00:38:56.924 --rc geninfo_all_blocks=1 00:38:56.924 --rc geninfo_unexecuted_blocks=1 00:38:56.924 00:38:56.924 ' 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:38:56.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.925 --rc genhtml_branch_coverage=1 00:38:56.925 --rc genhtml_function_coverage=1 00:38:56.925 --rc genhtml_legend=1 00:38:56.925 --rc geninfo_all_blocks=1 00:38:56.925 --rc geninfo_unexecuted_blocks=1 00:38:56.925 00:38:56.925 ' 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:38:56.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.925 --rc genhtml_branch_coverage=1 00:38:56.925 --rc genhtml_function_coverage=1 00:38:56.925 --rc genhtml_legend=1 00:38:56.925 --rc geninfo_all_blocks=1 00:38:56.925 --rc geninfo_unexecuted_blocks=1 00:38:56.925 00:38:56.925 ' 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:56.925 05:14:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:56.925 05:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:58.828 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:58.828 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:58.828 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:58.829 05:14:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:58.829 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:58.829 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:58.829 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:58.829 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:58.829 05:14:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:58.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:58.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:38:58.829 00:38:58.829 --- 10.0.0.2 ping statistics --- 00:38:58.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:58.829 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:58.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:58.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:38:58.829 00:38:58.829 --- 10.0.0.1 ping statistics --- 00:38:58.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:58.829 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:38:58.829 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2517593 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2517593 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2517593 ']' 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:58.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:58.830 05:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:58.830 [2024-10-28 05:14:49.297144] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:58.830 [2024-10-28 05:14:49.298232] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:38:58.830 [2024-10-28 05:14:49.298289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:59.088 [2024-10-28 05:14:49.436542] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:59.088 [2024-10-28 05:14:49.479209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.088 [2024-10-28 05:14:49.527100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:59.088 [2024-10-28 05:14:49.527168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:59.088 [2024-10-28 05:14:49.527184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:59.088 [2024-10-28 05:14:49.527198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:59.088 [2024-10-28 05:14:49.527209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:59.088 [2024-10-28 05:14:49.527884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.088 [2024-10-28 05:14:49.621754] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:59.088 [2024-10-28 05:14:49.622108] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
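The trace up to this point is nvmftestinit plus nvmfappstart: common.sh detects the two e810 ports (cvl_0_0, cvl_0_1), moves cvl_0_0 into a dedicated network namespace, addresses both ends of the link, opens TCP port 4420 through iptables, confirms reachability with a ping in each direction, and then launches nvmf_tgt inside that namespace in interrupt mode on core 1 (-m 0x2). A minimal shell sketch of the same setup, reconstructed from the commands visible in the log (interface names, addresses, and the binary path are taken from the trace; this is an illustration, not a verbatim excerpt of common.sh):

    # build the target-side namespace used by the test (names and IPs from the trace)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in (the test also tags the rule with an SPDK_NVMF comment for later cleanup)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
    # start the target in interrupt mode on core 1, as nvmfappstart -m 0x2 does above
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &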
00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.021 [2024-10-28 05:14:50.336572] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.021 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.022 [2024-10-28 05:14:50.352691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:00.022 05:14:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.022 malloc0 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:00.022 { 00:39:00.022 "params": { 00:39:00.022 "name": "Nvme$subsystem", 00:39:00.022 "trtype": "$TEST_TRANSPORT", 00:39:00.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:00.022 "adrfam": "ipv4", 00:39:00.022 "trsvcid": "$NVMF_PORT", 00:39:00.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:00.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:00.022 "hdgst": ${hdgst:-false}, 00:39:00.022 "ddgst": ${ddgst:-false} 00:39:00.022 }, 00:39:00.022 "method": "bdev_nvme_attach_controller" 00:39:00.022 } 00:39:00.022 EOF 00:39:00.022 )") 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:39:00.022 05:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:00.022 "params": { 00:39:00.022 "name": "Nvme1", 00:39:00.022 "trtype": "tcp", 00:39:00.022 "traddr": "10.0.0.2", 00:39:00.022 "adrfam": "ipv4", 00:39:00.022 "trsvcid": "4420", 00:39:00.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:00.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:00.022 "hdgst": false, 00:39:00.022 "ddgst": false 00:39:00.022 }, 00:39:00.022 "method": "bdev_nvme_attach_controller" 00:39:00.022 }' 00:39:00.022 [2024-10-28 05:14:50.442565] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
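With the target up, zcopy.sh provisions it entirely over JSON-RPC: a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420 (plus the discovery listener), a 32 MB malloc bdev with 4096-byte blocks, and that bdev attached as namespace 1. bdevperf is then pointed at the target through a JSON config produced by gen_nvmf_target_json and handed over a pipe (seen as --json /dev/fd/62 in the trace). The same sequence written directly against scripts/rpc.py, with every flag copied from the rpc_cmd calls above (a sketch under the assumption that rpc_cmd is the usual autotest_common.sh wrapper around rpc.py and that nvmf/common.sh is sourced for gen_nvmf_target_json):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                        # TCP transport, zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0                               # 32 MB bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1       # expose it as NSID 1
    # initiator side: 10-second verify workload against the generated attach-controller config
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192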
00:39:00.022 [2024-10-28 05:14:50.442672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517741 ] 00:39:00.022 [2024-10-28 05:14:50.581308] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:00.280 [2024-10-28 05:14:50.618198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:00.280 [2024-10-28 05:14:50.667331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.539 Running I/O for 10 seconds... 00:39:02.848 5139.00 IOPS, 40.15 MiB/s [2024-10-28T04:14:54.379Z] 5193.50 IOPS, 40.57 MiB/s [2024-10-28T04:14:55.314Z] 5202.00 IOPS, 40.64 MiB/s [2024-10-28T04:14:56.250Z] 5202.00 IOPS, 40.64 MiB/s [2024-10-28T04:14:57.185Z] 5208.40 IOPS, 40.69 MiB/s [2024-10-28T04:14:58.119Z] 5231.17 IOPS, 40.87 MiB/s [2024-10-28T04:14:59.052Z] 5284.57 IOPS, 41.29 MiB/s [2024-10-28T04:15:00.425Z] 5275.50 IOPS, 41.21 MiB/s [2024-10-28T04:15:01.359Z] 5270.78 IOPS, 41.18 MiB/s [2024-10-28T04:15:01.359Z] 5262.10 IOPS, 41.11 MiB/s 00:39:10.763 Latency(us) 00:39:10.763 [2024-10-28T04:15:01.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:10.763 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:10.763 Verification LBA range: start 0x0 length 0x1000 00:39:10.763 Nvme1n1 : 10.01 5265.20 41.13 0.00 0.00 24243.15 723.85 32701.09 00:39:10.763 [2024-10-28T04:15:01.359Z] =================================================================================================================== 00:39:10.763 [2024-10-28T04:15:01.359Z] Total : 5265.20 41.13 0.00 0.00 24243.15 723.85 32701.09 00:39:10.763 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2519031 00:39:10.763 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:10.763 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:10.763 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:10.763 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:10.763 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:39:10.763 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:39:10.763 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:10.763 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:10.763 { 00:39:10.763 "params": { 00:39:10.763 "name": "Nvme$subsystem", 00:39:10.763 "trtype": "$TEST_TRANSPORT", 00:39:10.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:10.763 "adrfam": "ipv4", 00:39:10.763 "trsvcid": "$NVMF_PORT", 00:39:10.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:10.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:10.763 "hdgst": ${hdgst:-false}, 00:39:10.763 "ddgst": ${ddgst:-false} 00:39:10.763 }, 00:39:10.763 "method": 
"bdev_nvme_attach_controller" 00:39:10.763 } 00:39:10.763 EOF 00:39:10.763 )") 00:39:10.763 [2024-10-28 05:15:01.220517] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.220566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:39:10.764 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:39:10.764 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:39:10.764 05:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:10.764 "params": { 00:39:10.764 "name": "Nvme1", 00:39:10.764 "trtype": "tcp", 00:39:10.764 "traddr": "10.0.0.2", 00:39:10.764 "adrfam": "ipv4", 00:39:10.764 "trsvcid": "4420", 00:39:10.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:10.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:10.764 "hdgst": false, 00:39:10.764 "ddgst": false 00:39:10.764 }, 00:39:10.764 "method": "bdev_nvme_attach_controller" 00:39:10.764 }' 00:39:10.764 [2024-10-28 05:15:01.228430] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.228458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.236422] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.236445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.244413] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.244433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.252408] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.252428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.260408] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.260428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.263096] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:39:10.764 [2024-10-28 05:15:01.263168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2519031 ] 00:39:10.764 [2024-10-28 05:15:01.268408] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.268427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.276408] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.276426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.284410] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.284430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.292410] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.292430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.300428] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.300452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.308427] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.308450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.316427] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.316450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.324424] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.324448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.332426] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.332449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.340425] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.340448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.348425] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.348448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:10.764 [2024-10-28 05:15:01.356425] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:10.764 [2024-10-28 05:15:01.356447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.364425] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.364449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.372425] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.372447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.380424] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.380447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.388424] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.388447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.396425] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.396448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.397602] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:11.023 [2024-10-28 05:15:01.404426] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.404450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.412430] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.412454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.420427] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.420450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.428427] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.428450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.436427] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.436450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.438215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.023 [2024-10-28 05:15:01.444447] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.444478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.452466] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.452509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.460436] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.460461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.468428] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.468453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.476429] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.476453] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.484438] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.484466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.490988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:11.023 [2024-10-28 05:15:01.492429] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.492454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.500429] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.500454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.508462] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.508501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.516477] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.516525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.524471] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.524514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.532469] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.532513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.023 [2024-10-28 05:15:01.540467] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.023 [2024-10-28 05:15:01.540510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.024 [2024-10-28 05:15:01.548469] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.024 [2024-10-28 05:15:01.548510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.024 [2024-10-28 05:15:01.556459] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.024 [2024-10-28 05:15:01.556500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.024 [2024-10-28 05:15:01.564430] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.024 [2024-10-28 05:15:01.564455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.024 [2024-10-28 05:15:01.572464] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.024 [2024-10-28 05:15:01.572505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.024 [2024-10-28 05:15:01.580466] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.024 [2024-10-28 05:15:01.580510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.024 [2024-10-28 05:15:01.588444] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.024 [2024-10-28 05:15:01.588475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:39:11.024 [2024-10-28 05:15:01.596429] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.024 [2024-10-28 05:15:01.596453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.024 [2024-10-28 05:15:01.604437] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.024 [2024-10-28 05:15:01.604465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.024 [2024-10-28 05:15:01.612437] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.024 [2024-10-28 05:15:01.612465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.282 [2024-10-28 05:15:01.620434] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.282 [2024-10-28 05:15:01.620460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.282 [2024-10-28 05:15:01.628443] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.282 [2024-10-28 05:15:01.628470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.282 [2024-10-28 05:15:01.636436] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.282 [2024-10-28 05:15:01.636463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.282 [2024-10-28 05:15:01.644434] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.282 [2024-10-28 05:15:01.644461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.282 [2024-10-28 05:15:01.652438] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.652465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.660523] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.660553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.668434] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.668462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 Running I/O for 5 seconds... 
00:39:11.283 [2024-10-28 05:15:01.683073] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.683105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.694097] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.694128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.710291] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.710322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.724714] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.724742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.735686] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.735714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.750019] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.750050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.762394] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.762424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.774131] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.774161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.787717] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.787745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.800049] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.800076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.810242] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.810268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.822574] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.822600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.834061] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.834091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.850838] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.850865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.861426] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 
[2024-10-28 05:15:01.861454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.283 [2024-10-28 05:15:01.875104] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.283 [2024-10-28 05:15:01.875131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:01.889643] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:01.889687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:01.900084] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:01.900113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:01.913703] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:01.913730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:01.931065] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:01.931093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:01.942865] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:01.942893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:01.957745] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:01.957772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:01.967684] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:01.967711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:01.981642] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:01.981687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:01.998361] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:01.998401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:02.014151] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:02.014182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:02.024806] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:02.024832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:02.037922] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:02.037955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:02.050267] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:02.050297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:02.061742] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:02.061768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:02.073620] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:02.073678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:02.085720] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:02.085748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.541 [2024-10-28 05:15:02.097897] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.541 [2024-10-28 05:15:02.097951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.542 [2024-10-28 05:15:02.109965] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.542 [2024-10-28 05:15:02.109994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.542 [2024-10-28 05:15:02.121584] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.542 [2024-10-28 05:15:02.121614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.542 [2024-10-28 05:15:02.133699] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.542 [2024-10-28 05:15:02.133727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.146060] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.146091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.158455] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.158485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.170815] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.170842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.185055] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.185086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.195386] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.195415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.208550] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.208579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.221042] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.221072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.238333] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.238370] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.249249] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.249278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.262867] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.262894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.279453] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.279483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.290297] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.290327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.306311] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.306340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.322369] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.322410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.338312] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.338342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.348512] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.348541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.360148] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.360177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.372128] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.372158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:11.800 [2024-10-28 05:15:02.384310] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:11.800 [2024-10-28 05:15:02.384342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.396285] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.396314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.408161] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.408190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.420392] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.420423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.432272] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.432301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.444501] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.444531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.456967] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.457010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.467970] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.467999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.480519] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.480556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.492688] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.492716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.504660] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.504702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.516902] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.516946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.533705] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.533730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.544642] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.544672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.557250] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.557280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.574325] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.574355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.584569] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.584599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.596528] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.596558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.608236] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.608265] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.620125] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.620155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.631797] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.631823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.059 [2024-10-28 05:15:02.644064] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.059 [2024-10-28 05:15:02.644094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.655770] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.655796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.668000] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.668029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 10458.00 IOPS, 81.70 MiB/s [2024-10-28T04:15:02.914Z] [2024-10-28 05:15:02.680329] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.680361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.692080] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.692109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.704033] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.704062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.716486] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.716516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.728395] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.728424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.740546] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.740577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.753050] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.753085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.764054] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.764083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.776090] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.776120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 
05:15:02.788345] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.788375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.800129] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.800159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.813019] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.813048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.823821] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.823848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.837613] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.837652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.853961] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.853990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.865229] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.865258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.878732] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.878759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.893977] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.894007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.318 [2024-10-28 05:15:02.903599] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.318 [2024-10-28 05:15:02.903628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:02.916823] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:02.916850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:02.933786] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:02.933813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:02.945379] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:02.945409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:02.962406] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:02.962436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:02.976166] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:02.976197] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:02.986939] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:02.986979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:02.999859] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:02.999886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.012089] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.012130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.024023] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.024053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.036553] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.036584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.048519] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.048550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.060623] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.060679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.072826] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.072853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.084295] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.084325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.096005] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.096035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.107910] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.107951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.119938] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.119968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.131565] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.131596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.143357] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.143388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.156454] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.156484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.578 [2024-10-28 05:15:03.168476] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.578 [2024-10-28 05:15:03.168506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.180461] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.180490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.192645] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.192674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.204610] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.204649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.216453] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.216483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.228825] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.228851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.240565] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.240594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.252404] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.252433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.264865] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.264892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.275938] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.275969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.289247] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.289278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.306328] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.306357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.316905] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.316949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.837 [2024-10-28 05:15:03.329948] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.837 [2024-10-28 05:15:03.329978] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.838 [2024-10-28 05:15:03.342276] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.838 [2024-10-28 05:15:03.342307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.838 [2024-10-28 05:15:03.354327] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.838 [2024-10-28 05:15:03.354357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.838 [2024-10-28 05:15:03.366341] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.838 [2024-10-28 05:15:03.366371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.838 [2024-10-28 05:15:03.382976] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.838 [2024-10-28 05:15:03.383001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.838 [2024-10-28 05:15:03.395962] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.838 [2024-10-28 05:15:03.395992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.838 [2024-10-28 05:15:03.406385] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.838 [2024-10-28 05:15:03.406415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:12.838 [2024-10-28 05:15:03.420041] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:12.838 [2024-10-28 05:15:03.420080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.433831] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.433857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.444241] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.444271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.456998] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.457027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.473847] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.473873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.484750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.484776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.497861] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.497887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.509884] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.509925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.521469] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.521500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.533546] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.533576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.545521] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.545551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.562904] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.562948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.573952] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.573981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.586703] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.586730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.602498] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.602528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.619247] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.096 [2024-10-28 05:15:03.619277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.096 [2024-10-28 05:15:03.629850] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.097 [2024-10-28 05:15:03.629891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.097 [2024-10-28 05:15:03.642812] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.097 [2024-10-28 05:15:03.642838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.097 [2024-10-28 05:15:03.657074] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.097 [2024-10-28 05:15:03.657103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.097 [2024-10-28 05:15:03.667536] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.097 [2024-10-28 05:15:03.667575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.097 10489.50 IOPS, 81.95 MiB/s [2024-10-28T04:15:03.693Z] [2024-10-28 05:15:03.680355] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.097 [2024-10-28 05:15:03.680384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.692388] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.692417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.704179] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
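The repeated pairs above, spdk_nvmf_subsystem_add_ns_ext() rejecting "Requested NSID 1 already in use" and nvmf_rpc_ns_paused() then reporting "Unable to add namespace", are the error path of the nvmf_subsystem_add_ns RPC when the requested NSID is already claimed within the subsystem. Below is a minimal sketch of provoking the same messages by hand against an already running SPDK nvmf target; the NQN, bdev names and sizes are illustrative, and the rpc.py flags are quoted from memory rather than taken from this build.

#!/usr/bin/env bash
# Sketch: reproduce "Requested NSID 1 already in use" against a running SPDK nvmf target.
# Assumes ./scripts/rpc.py can reach the target's RPC socket; all names below are illustrative.
set -euo pipefail
rpc=./scripts/rpc.py

# Two malloc bdevs to act as namespace backing devices (64 MB, 512-byte blocks).
$rpc bdev_malloc_create -b Malloc0 64 512
$rpc bdev_malloc_create -b Malloc1 64 512

# A TCP transport and a subsystem that accepts any host.
$rpc nvmf_create_transport -t TCP
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

# The first add claims NSID 1.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1

# The second add asks for the same NSID; the target should log
#   spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use
#   nvmf_rpc_ns_paused: Unable to add namespace
# and the RPC call fails.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
  || echo "expected failure: NSID 1 already in use"

In this build the test harness appears to drive the same RPC in a tight loop, which would explain why the pair repeats here with only the timestamps changing.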
00:39:13.355 [2024-10-28 05:15:03.704208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.716536] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.716565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.728567] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.728596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.740784] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.740811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.753282] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.753312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.769816] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.769843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.781371] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.781401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.793192] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.793221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.805312] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.805342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.817826] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.817853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.829983] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.830014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.842024] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.842068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.858134] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.858164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.875190] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.875220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.886162] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.886192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.902261] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.355 [2024-10-28 05:15:03.902292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.355 [2024-10-28 05:15:03.916009] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.356 [2024-10-28 05:15:03.916046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.356 [2024-10-28 05:15:03.926600] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.356 [2024-10-28 05:15:03.926629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.356 [2024-10-28 05:15:03.939820] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.356 [2024-10-28 05:15:03.939846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.614 [2024-10-28 05:15:03.951830] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.614 [2024-10-28 05:15:03.951857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.614 [2024-10-28 05:15:03.963684] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.614 [2024-10-28 05:15:03.963712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.614 [2024-10-28 05:15:03.975754] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.614 [2024-10-28 05:15:03.975780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.614 [2024-10-28 05:15:03.987704] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.614 [2024-10-28 05:15:03.987730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.614 [2024-10-28 05:15:04.001947] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.614 [2024-10-28 05:15:04.001977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.614 [2024-10-28 05:15:04.012052] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.614 [2024-10-28 05:15:04.012082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.024805] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.024831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.037040] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.037070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.048628] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.048667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.060748] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.060775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.077753] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.077781] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.089538] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.089568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.106126] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.106156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.119531] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.119562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.129956] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.129986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.145874] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.145901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.157478] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.157516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.169451] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.169479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.181837] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.181863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.193798] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.193826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.615 [2024-10-28 05:15:04.205523] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.615 [2024-10-28 05:15:04.205551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.873 [2024-10-28 05:15:04.223108] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.223138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.234256] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.234284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.250397] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.250427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.264329] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.264361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.274987] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.275016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.288131] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.288161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.300164] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.300193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.312404] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.312434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.324451] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.324480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.336620] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.336660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.349072] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.349102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.359828] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.359855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.372771] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.372797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.384749] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.384775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.396389] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.396418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.407777] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.407803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.420180] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.420210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.432146] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.432175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.444079] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.444109] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:13.874 [2024-10-28 05:15:04.456581] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:13.874 [2024-10-28 05:15:04.456610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.468694] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.468721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.480322] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.480353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.492450] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.492479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.504617] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.504655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.522395] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.522425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.538202] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.538232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.553525] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.553555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.563994] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.564023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.577019] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.577049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.587611] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.587651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.600184] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.600213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.612353] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.612382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.624417] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.624447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.636322] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.636355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.648606] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.648644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.660616] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.660662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 [2024-10-28 05:15:04.672708] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.672734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.132 10508.00 IOPS, 82.09 MiB/s [2024-10-28T04:15:04.728Z] [2024-10-28 05:15:04.683500] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.132 [2024-10-28 05:15:04.683530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.133 [2024-10-28 05:15:04.696295] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.133 [2024-10-28 05:15:04.696324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.133 [2024-10-28 05:15:04.707944] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.133 [2024-10-28 05:15:04.707973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.133 [2024-10-28 05:15:04.719650] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.133 [2024-10-28 05:15:04.719692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.731708] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.731736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.743373] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.743402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.755778] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.755805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.770091] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.770120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.781011] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.781040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.793849] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.793876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.805784] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
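The bandwidth samples interleaved with the error lines (10458.00 IOPS at 81.70 MiB/s, 10489.50 IOPS at 81.95 MiB/s, 10508.00 IOPS at 82.09 MiB/s) are mutually consistent with a fixed 8 KiB I/O size, since MiB/s = IOPS * io_size_KiB / 1024. The workload parameters themselves are not visible in this excerpt, so treat 8 KiB as an inference rather than a logged value. A quick check with awk:

#!/usr/bin/env bash
# Sanity-check the logged bandwidth samples against an assumed 8 KiB I/O size:
#   MiB/s = IOPS * io_size_KiB / 1024
for iops in 10458.00 10489.50 10508.00; do
  awk -v iops="$iops" 'BEGIN { printf "%s IOPS * 8 KiB = %.2f MiB/s\n", iops, iops * 8 / 1024 }'
done
# Expected output: 81.70, 81.95 and 82.09 MiB/s, matching the samples in the log.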
00:39:14.408 [2024-10-28 05:15:04.805810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.817765] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.817791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.829161] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.829191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.840750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.840775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.852463] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.852501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.864250] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.864279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.876101] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.876131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.887926] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.887956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.899796] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.899823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.911685] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.911710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.923287] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.923318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.937128] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.937158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.947476] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.947506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.960723] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.960749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.973043] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.973072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.408 [2024-10-28 05:15:04.984047] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.408 [2024-10-28 05:15:04.984077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:04.996143] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:04.996169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.007555] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.007581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.020782] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.020809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.030852] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.030878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.044172] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.044202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.055652] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.055695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.067652] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.067694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.079619] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.079671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.091722] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.091749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.103622] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.103661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.115337] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.115368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.128901] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.128928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.138928] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.138958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.151959] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.151988] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.164171] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.164202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.176545] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.176576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.189111] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.189140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.201334] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.201364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.212837] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.212863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.224877] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.224903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.235883] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.235910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.249474] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.249505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.261139] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.261179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.271482] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.271511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.284987] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.285030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.711 [2024-10-28 05:15:05.295138] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.711 [2024-10-28 05:15:05.295164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.306792] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.306826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.322818] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.322845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.336013] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.336054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.346688] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.346715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.360780] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.360806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.370960] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.370990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.384221] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.384251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.396407] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.396437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.408656] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.408706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.420509] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.420539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.432949] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.432979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.450932] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.450957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.462882] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.462923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.477650] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.973 [2024-10-28 05:15:05.477693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.973 [2024-10-28 05:15:05.488008] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.974 [2024-10-28 05:15:05.488037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.974 [2024-10-28 05:15:05.501305] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.974 [2024-10-28 05:15:05.501335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.974 [2024-10-28 05:15:05.513054] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.974 [2024-10-28 05:15:05.513085] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.974 [2024-10-28 05:15:05.524199] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.974 [2024-10-28 05:15:05.524229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.974 [2024-10-28 05:15:05.535785] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.974 [2024-10-28 05:15:05.535812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.974 [2024-10-28 05:15:05.548036] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.974 [2024-10-28 05:15:05.548079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.974 [2024-10-28 05:15:05.559963] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.974 [2024-10-28 05:15:05.559993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.572782] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.572809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.590362] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.590391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.602319] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.602350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.615072] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.615103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.627215] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.627246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.641343] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.641373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.652215] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.652244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.665903] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.665958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 10544.50 IOPS, 82.38 MiB/s [2024-10-28T04:15:05.829Z] [2024-10-28 05:15:05.677764] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.677790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.694943] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.694973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 
05:15:05.705280] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.705309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.718251] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.718281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.732876] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.732903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.743480] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.743510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.756182] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.756211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.768134] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.768163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.780792] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.780819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.792585] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.792614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.805161] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.805190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.233 [2024-10-28 05:15:05.822118] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.233 [2024-10-28 05:15:05.822148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.836044] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.836075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.847054] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.847083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.860334] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.860366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.872478] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.872508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.884483] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.884513] 
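The repeated subsystem.c:2124 / nvmf_rpc.c:1517 error pairs in this stretch are the zcopy test deliberately asking the target to attach another namespace under NSID 1, which is already occupied, so spdk_nvmf_subsystem_add_ns_ext rejects every attempt and the RPC layer logs "Unable to add namespace"; the interleaved "IOPS, MiB/s" lines are progress output from the I/O job still running against Nvme1n1. A minimal sketch of how such a collision can be provoked through SPDK's rpc.py, assuming a target that already exposes NSID 1 on nqn.2016-06.io.spdk:cnode1 and using a hypothetical spare bdev name (the literal loop inside zcopy.sh is not shown in this excerpt):
  # malloc1 is a stand-in name; NSID 1 is already taken, so the add is expected to fail
  ./scripts/rpc.py bdev_malloc_create -b malloc1 32 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1 \
      || echo 'rejected as expected: Requested NSID 1 already in use'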
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.896394] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.896423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.908815] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.908842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.925751] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.925777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.936663] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.936705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.949852] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.949877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.961461] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.961490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.973521] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.973550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.985268] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.985297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:05.997416] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:05.997446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:06.014461] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:06.014491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:06.025169] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:06.025200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:06.038359] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:06.038388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:06.050906] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:06.050953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:06.065023] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:06.065053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.492 [2024-10-28 05:15:06.075195] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.492 [2024-10-28 05:15:06.075225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.088609] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.088647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.106435] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.106465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.121415] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.121445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.131816] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.131843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.144779] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.144805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.156515] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.156543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.167279] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.167307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.182266] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.182294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.191487] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.191514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.203839] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.203866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.217105] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.217132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.226456] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.226481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.239036] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.239062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.250041] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.250066] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.261457] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.261482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.272334] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.272360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.283030] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.283056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.298441] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.298482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.307444] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.307470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.319351] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.319375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.332112] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.332140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.751 [2024-10-28 05:15:06.341712] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.751 [2024-10-28 05:15:06.341739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.353337] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.353362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.364338] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.364365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.375920] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.375961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.389761] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.389789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.399760] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.399787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.411539] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.411566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.422745] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.422773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.433413] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.433441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.444108] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.444135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.455244] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.455269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.466305] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.466333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.477208] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.477243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.488396] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.488423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.499534] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.499560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.512646] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.512673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.521992] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.522018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.533945] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.533972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.544927] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.544966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.554324] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.554351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.569214] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.569239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.579577] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.579603] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.592284] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.010 [2024-10-28 05:15:06.592311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.010 [2024-10-28 05:15:06.601727] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.011 [2024-10-28 05:15:06.601754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.269 [2024-10-28 05:15:06.613686] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.269 [2024-10-28 05:15:06.613726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.269 [2024-10-28 05:15:06.624658] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.269 [2024-10-28 05:15:06.624684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.269 [2024-10-28 05:15:06.635506] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.269 [2024-10-28 05:15:06.635534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.269 [2024-10-28 05:15:06.648417] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.269 [2024-10-28 05:15:06.648444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.269 [2024-10-28 05:15:06.657800] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.269 [2024-10-28 05:15:06.657827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.269 [2024-10-28 05:15:06.670110] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.269 [2024-10-28 05:15:06.670135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.269 10641.40 IOPS, 83.14 MiB/s [2024-10-28T04:15:06.865Z] [2024-10-28 05:15:06.680760] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.269 [2024-10-28 05:15:06.680787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.269 00:39:16.269 Latency(us) 00:39:16.269 [2024-10-28T04:15:06.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:16.269 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:16.270 Nvme1n1 : 5.01 10645.29 83.17 0.00 0.00 12007.91 3309.04 19464.93 00:39:16.270 [2024-10-28T04:15:06.866Z] =================================================================================================================== 00:39:16.270 [2024-10-28T04:15:06.866Z] Total : 10645.29 83.17 0.00 0.00 12007.91 3309.04 19464.93 00:39:16.270 [2024-10-28 05:15:06.688420] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.688444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.696440] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.696465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.704457] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 
05:15:06.704497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.712490] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.712542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.720503] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.720555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.728489] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.728539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.736489] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.736539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.744480] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.744530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.752496] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.752548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.760485] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.760531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.768487] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.768539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.776493] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.776545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.784491] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.784544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.792497] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.792550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.800486] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.800535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.808487] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.808538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.816486] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.816537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.824486] 
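The Latency(us) block just above is the 5-second summary of that background job: Nvme1n1 sustained 10645.29 IOPS of 8192-byte I/Os at queue depth 128, with 12007.91 us average completion latency and no failures or timeouts. The reported 83.17 MiB/s follows directly from those figures; a quick cross-check, not part of the test output:
  # 10645.29 IOPS * 8192 B per I/O / 1048576 B per MiB ~= 83.17 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 10645.29 * 8192 / 1048576 }'
  # Little's law sanity check: 128 outstanding / 0.01200791 s average latency ~= 10660 IOPS
  awk 'BEGIN { printf "%.0f IOPS\n", 128 / 0.01200791 }'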
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.824536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.832437] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.832466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.840450] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.840484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.848490] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.848543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.270 [2024-10-28 05:15:06.856483] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.270 [2024-10-28 05:15:06.856534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.560 [2024-10-28 05:15:06.864476] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.560 [2024-10-28 05:15:06.864512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.560 [2024-10-28 05:15:06.872412] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.560 [2024-10-28 05:15:06.872432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.560 [2024-10-28 05:15:06.880410] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.560 [2024-10-28 05:15:06.880429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.560 [2024-10-28 05:15:06.888417] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.560 [2024-10-28 05:15:06.888440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2519031) - No such process 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2519031 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.560 delay0 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.560 05:15:06 
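At zcopy.sh lines 49 through 53 the test reaps the background add-namespace loop (the pid 2519031 "No such process" kill is harmless, that process has already exited), detaches NSID 1, and wraps malloc0 in a delay bdev named delay0 that injects roughly one second (1,000,000 us) of latency on reads and writes. Expressed as plain rpc.py calls, which is roughly what the test's rpc_cmd wrapper resolves to:
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # delay0 forwards I/O to malloc0 with the average/p99 read and write latencies below (microseconds)
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000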
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.560 05:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:16.560 [2024-10-28 05:15:07.115734] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:24.675 Initializing NVMe Controllers 00:39:24.675 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:24.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:24.675 Initialization complete. Launching workers. 00:39:24.675 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 216, failed: 22564 00:39:24.675 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22625, failed to submit 155 00:39:24.675 success 22565, unsuccessful 60, failed 0 00:39:24.675 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:24.675 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:24.675 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:24.675 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:24.675 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:24.675 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:24.675 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:24.675 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:24.675 rmmod nvme_tcp 00:39:24.675 rmmod nvme_fabrics 00:39:24.675 rmmod nvme_keyring 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2517593 ']' 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2517593 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2517593 ']' 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2517593 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 
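With delay0 re-attached as NSID 1 (zcopy.sh line 54), line 56 points the abort example at the target over TCP: queue depth 64, a 50/50 random read/write mix, 5 seconds on core mask 0x1. Because every I/O now sits behind the roughly 1 s delay bdev, very little completes before the tool aborts it, which is consistent with the summary above: 216 I/Os completed against 22564 reported as failed, 22625 aborts submitted (155 could not be submitted), 22565 successful and 60 unsuccessful. The invocation as it appears in the trace, rewrapped for readability:
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'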
00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2517593 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2517593' 00:39:24.676 killing process with pid 2517593 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2517593 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2517593 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:24.676 05:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:26.050 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:26.050 00:39:26.050 real 0m29.511s 00:39:26.050 user 0m38.959s 00:39:26.050 sys 0m10.709s 00:39:26.050 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:26.050 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:26.050 ************************************ 00:39:26.050 END TEST nvmf_zcopy 00:39:26.050 ************************************ 00:39:26.050 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:26.050 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:26.050 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # 
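The block above is nvmftestfini tearing the zcopy environment down: the kernel initiator modules are removed (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), the nvmf_tgt reactor process 2517593 is killed and reaped, the SPDK-specific iptables rules are stripped, and the test interface address is flushed, after which the harness reports 29.5 s wall clock for nvmf_zcopy and immediately launches the nvmf_nmic test. A condensed sketch of that cleanup, with tgt_pid standing in for the target's pid:
  modprobe -r nvme-tcp nvme-fabrics            # produces the rmmod nvme_* output seen above
  kill "$tgt_pid" && wait "$tgt_pid"           # 2517593 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1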
xtrace_disable 00:39:26.050 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:26.050 ************************************ 00:39:26.050 START TEST nvmf_nmic 00:39:26.050 ************************************ 00:39:26.050 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:26.310 * Looking for test storage... 00:39:26.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1689 -- # lcov --version 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:39:26.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.310 --rc genhtml_branch_coverage=1 00:39:26.310 --rc genhtml_function_coverage=1 00:39:26.310 --rc genhtml_legend=1 00:39:26.310 --rc geninfo_all_blocks=1 00:39:26.310 --rc geninfo_unexecuted_blocks=1 00:39:26.310 00:39:26.310 ' 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:39:26.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.310 --rc genhtml_branch_coverage=1 00:39:26.310 --rc genhtml_function_coverage=1 00:39:26.310 --rc genhtml_legend=1 00:39:26.310 --rc geninfo_all_blocks=1 00:39:26.310 --rc geninfo_unexecuted_blocks=1 00:39:26.310 00:39:26.310 ' 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:39:26.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.310 --rc genhtml_branch_coverage=1 00:39:26.310 --rc genhtml_function_coverage=1 00:39:26.310 --rc genhtml_legend=1 00:39:26.310 --rc geninfo_all_blocks=1 00:39:26.310 --rc geninfo_unexecuted_blocks=1 00:39:26.310 00:39:26.310 ' 00:39:26.310 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:39:26.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.310 --rc genhtml_branch_coverage=1 00:39:26.310 --rc genhtml_function_coverage=1 00:39:26.310 --rc genhtml_legend=1 00:39:26.310 --rc geninfo_all_blocks=1 00:39:26.310 --rc geninfo_unexecuted_blocks=1 00:39:26.311 00:39:26.311 ' 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
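The version-probe trace above is the harness inspecting the installed lcov: it takes the last field of "lcov --version", splits it on dots, and compares it field by field against 1.15 and 2 (lt/cmp_versions from scripts/common.sh) before exporting the LCOV_OPTS branch and function coverage flags that follow. A compact equivalent of that kind of dotted-version check, not the literal cmp_versions implementation:
  ver="$(lcov --version | awk '{print $NF}')"   # e.g. 1.15
  # sort -V orders dotted version strings; ver sorting first (and not equal) means ver < 2
  if [ "$(printf '%s\n' "$ver" 2 | sort -V | head -n1)" = "$ver" ] && [ "$ver" != 2 ]; then
    echo "lcov $ver predates 2"
  fi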
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
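The nvmf/common.sh lines above establish the connection parameters the nmic test will reuse: listener ports 4420/4421/4422, the 192.168.100 address prefix, a freshly generated host NQN plus its host ID, and nqn.2016-06.io.spdk:testnqn as the default subsystem. For orientation only, this is the general shape of the nvme-cli connect call those variables get spliced into later (the actual invocation appears further down the log, outside this excerpt):
  # target address is illustrative; earlier in this run the target listened on 10.0.0.2:4420
  nvme connect -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"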
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:26.311 05:15:16 
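build_nvmf_app_args, traced above, assembles the argument array that nvmftestinit will use to start the target: the shared-memory id and the 0xFFFF mask copied verbatim from the trace, together with the --interrupt-mode switch appended just below, which is what gives this interrupt_mode suite its name. A sketch of how that array is typically expanded into a launch; the binary path and the backgrounding are assumptions here, only the appended flags come from the log:
  NVMF_APP=(./build/bin/nvmf_tgt)                 # assumed location of the target binary
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # as appended in nvmf/common.sh above
  NVMF_APP+=(--interrupt-mode)                    # appended in the trace that follows
  "${NVMF_APP[@]}" &
  tgt_pid=$!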
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:26.311 05:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:28.214 05:15:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:28.214 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:28.214 05:15:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:28.214 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:28.214 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:28.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:28.215 
05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:28.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
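The nvmf_tcp_init trace above splits the two detected ice ports into a target side and an initiator side by moving cvl_0_0 into a dedicated network namespace. A condensed sketch of the namespace and addressing steps, assuming the interface names and 10.0.0.0/24 addressing this particular run happened to use (other runs derive the names from whatever NICs are detected):

  # target port gets its own namespace; the initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address

The trace below then brings both links (and the namespace loopback) up, inserts an iptables ACCEPT rule for TCP port 4420 on the initiator interface, and ping-checks both directions before the target is started.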
00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:28.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:28.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:39:28.215 00:39:28.215 --- 10.0.0.2 ping statistics --- 00:39:28.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:28.215 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:28.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:28.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:39:28.215 00:39:28.215 --- 10.0.0.1 ping statistics --- 00:39:28.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:28.215 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2522920 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 2522920 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2522920 ']' 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:28.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:28.215 05:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:28.474 [2024-10-28 05:15:18.833871] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:28.474 [2024-10-28 05:15:18.835163] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:39:28.474 [2024-10-28 05:15:18.835223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:28.474 [2024-10-28 05:15:18.988570] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:28.474 [2024-10-28 05:15:19.029795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:28.733 [2024-10-28 05:15:19.083918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:28.733 [2024-10-28 05:15:19.083973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:28.733 [2024-10-28 05:15:19.083989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:28.733 [2024-10-28 05:15:19.084002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:28.733 [2024-10-28 05:15:19.084013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:28.733 [2024-10-28 05:15:19.085674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:28.733 [2024-10-28 05:15:19.085728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:28.733 [2024-10-28 05:15:19.085786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:28.733 [2024-10-28 05:15:19.085790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.733 [2024-10-28 05:15:19.177489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:28.733 [2024-10-28 05:15:19.177723] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
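nvmfappstart above boils down to launching nvmf_tgt inside that namespace and waiting for its JSON-RPC socket. A minimal stand-in for the launch-and-wait step, assuming the workspace path from this job and the default /var/tmp/spdk.sock socket (the real waitforlisten helper in autotest_common.sh does more bookkeeping than this polling loop):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # crude replacement for waitforlisten: poll for the RPC unix socket
  until [ -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done

With --interrupt-mode and core mask 0xF the target starts one reactor per core in the mask, which is what the four "Reactor started on core" notices above correspond to.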
00:39:28.733 [2024-10-28 05:15:19.178019] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:28.733 [2024-10-28 05:15:19.178605] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:28.733 [2024-10-28 05:15:19.178887] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:29.300 [2024-10-28 05:15:19.866551] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.300 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:29.558 Malloc0 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.558 05:15:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:29.558 [2024-10-28 05:15:19.930706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:29.558 test case1: single bdev can't be used in multiple subsystems 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:29.558 [2024-10-28 05:15:19.954414] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:29.558 [2024-10-28 05:15:19.954443] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:29.558 [2024-10-28 05:15:19.954457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:29.558 request: 00:39:29.558 { 00:39:29.558 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:29.558 "namespace": { 00:39:29.558 "bdev_name": "Malloc0", 00:39:29.558 "no_auto_visible": false 00:39:29.558 }, 00:39:29.558 "method": "nvmf_subsystem_add_ns", 00:39:29.558 "req_id": 1 00:39:29.558 } 00:39:29.558 Got JSON-RPC error response 00:39:29.558 response: 00:39:29.558 { 00:39:29.558 "code": -32602, 00:39:29.558 "message": "Invalid parameters" 00:39:29.558 } 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
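Test case1 above provokes that JSON-RPC error on purpose: Malloc0 is already claimed by cnode1, so adding it to cnode2 must fail. Reduced to the rpc.py calls that rpc_cmd issues in the trace (rpc.py talks to the same /var/tmp/spdk.sock by default), the sequence is roughly:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  # expected to fail: the bdev is already claimed (error=-1, "Invalid parameters")
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 && exit 1

The test records the non-zero status in nmic_status and prints "Adding namespace failed - expected result", which is the branch taken in the lines that follow.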
00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:29.558 Adding namespace failed - expected result. 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:29.558 test case2: host connect to nvmf target in multiple paths 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:29.558 [2024-10-28 05:15:19.962510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.558 05:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:29.558 05:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:29.817 05:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:29.817 05:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:39:29.817 05:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:29.817 05:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:29.817 05:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:39:31.716 05:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:31.973 05:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:31.973 05:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:31.973 05:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:39:31.973 05:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:31.973 05:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:39:31.974 05:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:31.974 [global] 00:39:31.974 thread=1 00:39:31.974 invalidate=1 00:39:31.974 rw=write 00:39:31.974 time_based=1 00:39:31.974 runtime=1 00:39:31.974 ioengine=libaio 00:39:31.974 direct=1 00:39:31.974 bs=4096 00:39:31.974 iodepth=1 00:39:31.974 norandommap=0 00:39:31.974 numjobs=1 00:39:31.974 00:39:31.974 verify_dump=1 00:39:31.974 verify_backlog=512 00:39:31.974 verify_state_save=0 00:39:31.974 do_verify=1 00:39:31.974 verify=crc32c-intel 00:39:31.974 [job0] 00:39:31.974 filename=/dev/nvme0n1 00:39:31.974 Could not set queue depth (nvme0n1) 00:39:31.974 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:31.974 fio-3.35 00:39:31.974 Starting 1 thread 00:39:33.349 00:39:33.349 job0: (groupid=0, jobs=1): err= 0: pid=2523429: Mon Oct 28 05:15:23 2024 00:39:33.349 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:39:33.349 slat (nsec): min=7128, max=43076, avg=13096.51, stdev=5577.76 00:39:33.349 clat (usec): min=287, max=715, avg=324.43, stdev=23.82 00:39:33.349 lat (usec): min=295, max=727, avg=337.53, stdev=28.21 00:39:33.349 clat percentiles (usec): 00:39:33.349 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 297], 20.00th=[ 302], 00:39:33.349 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 338], 00:39:33.349 | 70.00th=[ 338], 80.00th=[ 343], 90.00th=[ 351], 95.00th=[ 355], 00:39:33.349 | 99.00th=[ 367], 99.50th=[ 396], 99.90th=[ 461], 99.95th=[ 717], 00:39:33.349 | 99.99th=[ 717] 00:39:33.349 write: IOPS=2028, BW=8116KiB/s (8311kB/s)(8124KiB/1001msec); 0 zone resets 00:39:33.349 slat (usec): min=9, max=28868, avg=31.07, stdev=640.83 00:39:33.349 clat (usec): min=166, max=438, avg=198.66, stdev=22.63 00:39:33.349 lat (usec): min=181, max=29115, avg=229.73, stdev=642.51 00:39:33.349 clat percentiles (usec): 00:39:33.349 | 1.00th=[ 176], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 180], 00:39:33.349 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 206], 00:39:33.349 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 231], 00:39:33.349 | 99.00th=[ 281], 99.50th=[ 314], 99.90th=[ 351], 99.95th=[ 359], 00:39:33.349 | 99.99th=[ 437] 00:39:33.349 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:39:33.349 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:33.349 lat (usec) : 250=55.82%, 500=44.15%, 750=0.03% 00:39:33.349 cpu : usr=3.60%, sys=7.10%, ctx=3572, majf=0, minf=1 00:39:33.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.349 issued rwts: total=1536,2031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:33.349 00:39:33.349 Run status group 0 (all jobs): 00:39:33.349 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:39:33.349 WRITE: bw=8116KiB/s (8311kB/s), 8116KiB/s-8116KiB/s (8311kB/s-8311kB/s), io=8124KiB (8319kB), run=1001-1001msec 00:39:33.350 00:39:33.350 Disk stats (read/write): 00:39:33.350 nvme0n1: ios=1571/1536, merge=0/0, ticks=629/286, in_queue=915, util=98.60% 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:39:33.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.350 rmmod nvme_tcp 00:39:33.350 rmmod nvme_fabrics 00:39:33.350 rmmod nvme_keyring 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2522920 ']' 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2522920 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2522920 ']' 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2522920 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2522920 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2522920' 00:39:33.350 killing process with pid 2522920 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2522920 00:39:33.350 05:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2522920 00:39:33.609 05:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:33.609 05:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:33.609 05:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:33.609 05:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:33.609 05:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:39:33.609 05:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:33.609 05:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:39:33.609 05:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:33.609 05:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:33.609 05:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.609 05:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:33.609 05:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.145 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:36.145 00:39:36.145 real 0m9.546s 00:39:36.145 user 0m16.785s 00:39:36.145 sys 0m3.295s 00:39:36.145 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:36.145 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:36.145 ************************************ 00:39:36.145 END TEST nvmf_nmic 00:39:36.145 ************************************ 00:39:36.145 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:36.145 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:36.145 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:36.145 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:36.145 ************************************ 00:39:36.145 START TEST nvmf_fio_target 00:39:36.145 ************************************ 00:39:36.145 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:36.145 * Looking for test storage... 
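For reference before the nvmf_fio_target trace ramps up, the host side of the nmic test that just finished consists of connecting to cnode1 over both listeners (4420 and 4421), waiting for the SPDKISFASTANDAWESOME serial to appear in lsblk, running the generated fio write job, and disconnecting, which is why nvme reports two controllers torn down. A rough equivalent using the host NQN/ID and the job parameters printed above; the fio options here are the command-line form of the job file the wrapper generated, and /dev/nvme0n1 depends on enumeration order:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --bs=4096 --iodepth=1 --rw=write --time_based --runtime=1 \
      --verify=crc32c-intel --do_verify=1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # tears down both controllers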
00:39:36.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lcov --version 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:39:36.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.146 --rc genhtml_branch_coverage=1 00:39:36.146 --rc genhtml_function_coverage=1 00:39:36.146 --rc genhtml_legend=1 00:39:36.146 --rc geninfo_all_blocks=1 00:39:36.146 --rc geninfo_unexecuted_blocks=1 00:39:36.146 00:39:36.146 ' 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:39:36.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.146 --rc genhtml_branch_coverage=1 00:39:36.146 --rc genhtml_function_coverage=1 00:39:36.146 --rc genhtml_legend=1 00:39:36.146 --rc geninfo_all_blocks=1 00:39:36.146 --rc geninfo_unexecuted_blocks=1 00:39:36.146 00:39:36.146 ' 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:39:36.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.146 --rc genhtml_branch_coverage=1 00:39:36.146 --rc genhtml_function_coverage=1 00:39:36.146 --rc genhtml_legend=1 00:39:36.146 --rc geninfo_all_blocks=1 00:39:36.146 --rc geninfo_unexecuted_blocks=1 00:39:36.146 00:39:36.146 ' 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:39:36.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.146 --rc genhtml_branch_coverage=1 00:39:36.146 --rc genhtml_function_coverage=1 00:39:36.146 --rc genhtml_legend=1 00:39:36.146 --rc geninfo_all_blocks=1 00:39:36.146 --rc geninfo_unexecuted_blocks=1 00:39:36.146 
00:39:36.146 ' 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:36.146 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:36.147 05:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:38.049 05:15:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:38.049 05:15:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:38.049 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:38.049 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:38.050 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:38.050 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:38.050 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:38.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:38.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:39:38.050 00:39:38.050 --- 10.0.0.2 ping statistics --- 00:39:38.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.050 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:38.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:38.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:39:38.050 00:39:38.050 --- 10.0.0.1 ping statistics --- 00:39:38.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.050 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2525586 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2525586 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2525586 ']' 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:38.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:38.050 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:38.050 [2024-10-28 05:15:28.524887] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:38.050 [2024-10-28 05:15:28.526028] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:39:38.050 [2024-10-28 05:15:28.526095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:38.309 [2024-10-28 05:15:28.667509] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:38.309 [2024-10-28 05:15:28.709342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:38.309 [2024-10-28 05:15:28.760718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:38.309 [2024-10-28 05:15:28.760789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:38.309 [2024-10-28 05:15:28.760806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:38.309 [2024-10-28 05:15:28.760820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:38.309 [2024-10-28 05:15:28.760831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:38.309 [2024-10-28 05:15:28.762448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:38.310 [2024-10-28 05:15:28.762479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:38.310 [2024-10-28 05:15:28.762532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:38.310 [2024-10-28 05:15:28.762535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.310 [2024-10-28 05:15:28.852139] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:38.310 [2024-10-28 05:15:28.852390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:38.310 [2024-10-28 05:15:28.852688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:38.310 [2024-10-28 05:15:28.853239] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:38.310 [2024-10-28 05:15:28.853501] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
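For readability, the environment setup traced above amounts to roughly the following command sequence. This is a condensed sketch only: the namespace name, the interface names cvl_0_0/cvl_0_1, and the 10.0.0.x addresses are the ones chosen in this particular run, and the nvmf/common.sh helpers wrap these calls with additional bookkeeping (flushing stale addresses, retries, cleanup traps) that is omitted here.
# move the target-side port into its own network namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side lives in the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port toward the initiator interface (the harness also tags the rule with an SPDK_NVMF comment)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify reachability in both directions, then launch the target inside the namespace in interrupt mode
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
The subsequent trace configures this target over /var/tmp/spdk.sock with rpc.py (transport creation, malloc/raid bdevs, subsystem, namespaces, listener) before the fio workloads are started.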
00:39:38.310 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:38.310 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:39:38.310 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:38.310 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:38.310 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:38.310 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:38.310 05:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:38.568 [2024-10-28 05:15:29.159308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.827 05:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:39.085 05:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:39.085 05:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:39.343 05:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:39.343 05:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:39.910 05:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:39.910 05:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:40.170 05:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:40.170 05:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:40.428 05:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:40.687 05:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:40.687 05:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:41.253 05:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:41.253 05:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:41.253 05:15:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:41.253 05:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:41.819 05:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:42.077 05:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:42.077 05:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:42.335 05:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:42.335 05:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:42.592 05:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:42.849 [2024-10-28 05:15:33.319437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:42.849 05:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:43.107 05:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:43.365 05:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:43.623 05:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:43.623 05:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:39:43.623 05:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:43.623 05:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:39:43.623 05:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:39:43.623 05:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:39:46.152 05:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:46.152 05:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:39:46.152 05:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:46.152 05:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:39:46.152 05:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:46.152 05:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:39:46.152 05:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:46.152 [global] 00:39:46.152 thread=1 00:39:46.152 invalidate=1 00:39:46.152 rw=write 00:39:46.152 time_based=1 00:39:46.152 runtime=1 00:39:46.152 ioengine=libaio 00:39:46.152 direct=1 00:39:46.152 bs=4096 00:39:46.152 iodepth=1 00:39:46.152 norandommap=0 00:39:46.152 numjobs=1 00:39:46.152 00:39:46.152 verify_dump=1 00:39:46.153 verify_backlog=512 00:39:46.153 verify_state_save=0 00:39:46.153 do_verify=1 00:39:46.153 verify=crc32c-intel 00:39:46.153 [job0] 00:39:46.153 filename=/dev/nvme0n1 00:39:46.153 [job1] 00:39:46.153 filename=/dev/nvme0n2 00:39:46.153 [job2] 00:39:46.153 filename=/dev/nvme0n3 00:39:46.153 [job3] 00:39:46.153 filename=/dev/nvme0n4 00:39:46.153 Could not set queue depth (nvme0n1) 00:39:46.153 Could not set queue depth (nvme0n2) 00:39:46.153 Could not set queue depth (nvme0n3) 00:39:46.153 Could not set queue depth (nvme0n4) 00:39:46.153 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:46.153 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:46.153 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:46.153 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:46.153 fio-3.35 00:39:46.153 Starting 4 threads 00:39:47.087 00:39:47.087 job0: (groupid=0, jobs=1): err= 0: pid=2526591: Mon Oct 28 05:15:37 2024 00:39:47.087 read: IOPS=151, BW=607KiB/s (622kB/s)(608KiB/1001msec) 00:39:47.087 slat (nsec): min=4930, max=33051, avg=14197.64, stdev=7297.45 00:39:47.087 clat (usec): min=334, max=41019, avg=5755.40, stdev=13745.74 00:39:47.087 lat (usec): min=342, max=41033, avg=5769.60, stdev=13751.40 00:39:47.087 clat percentiles (usec): 00:39:47.087 | 1.00th=[ 338], 5.00th=[ 351], 10.00th=[ 363], 20.00th=[ 383], 00:39:47.087 | 30.00th=[ 396], 40.00th=[ 404], 50.00th=[ 416], 60.00th=[ 457], 00:39:47.087 | 70.00th=[ 482], 80.00th=[ 494], 90.00th=[41157], 95.00th=[41157], 00:39:47.087 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:47.087 | 99.99th=[41157] 00:39:47.087 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:39:47.087 slat (nsec): min=5819, max=33722, avg=10514.56, stdev=6178.98 00:39:47.087 clat (usec): min=183, max=382, avg=226.99, stdev=24.85 00:39:47.087 lat (usec): min=191, max=388, avg=237.50, stdev=25.88 00:39:47.087 clat percentiles (usec): 00:39:47.087 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 208], 00:39:47.087 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:39:47.087 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 265], 00:39:47.087 | 99.00th=[ 
343], 99.50th=[ 355], 99.90th=[ 383], 99.95th=[ 383], 00:39:47.087 | 99.99th=[ 383] 00:39:47.087 bw ( KiB/s): min= 4096, max= 4096, per=32.53%, avg=4096.00, stdev= 0.00, samples=1 00:39:47.087 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:47.087 lat (usec) : 250=68.67%, 500=27.11%, 750=1.20% 00:39:47.087 lat (msec) : 50=3.01% 00:39:47.087 cpu : usr=0.50%, sys=0.50%, ctx=664, majf=0, minf=2 00:39:47.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:47.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:47.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:47.087 issued rwts: total=152,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:47.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:47.087 job1: (groupid=0, jobs=1): err= 0: pid=2526609: Mon Oct 28 05:15:37 2024 00:39:47.087 read: IOPS=22, BW=89.1KiB/s (91.3kB/s)(92.0KiB/1032msec) 00:39:47.087 slat (nsec): min=9802, max=33562, avg=27738.48, stdev=8591.90 00:39:47.087 clat (usec): min=368, max=42073, avg=39206.38, stdev=8470.58 00:39:47.087 lat (usec): min=386, max=42087, avg=39234.12, stdev=8472.54 00:39:47.087 clat percentiles (usec): 00:39:47.087 | 1.00th=[ 367], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:39:47.087 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:47.087 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:47.087 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:47.087 | 99.99th=[42206] 00:39:47.087 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:39:47.087 slat (nsec): min=6384, max=40991, avg=12410.28, stdev=6451.35 00:39:47.087 clat (usec): min=167, max=412, avg=236.49, stdev=33.69 00:39:47.087 lat (usec): min=175, max=434, avg=248.90, stdev=34.95 00:39:47.087 clat percentiles (usec): 00:39:47.087 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 212], 00:39:47.087 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:39:47.087 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 302], 00:39:47.087 | 99.00th=[ 351], 99.50th=[ 383], 99.90th=[ 412], 99.95th=[ 412], 00:39:47.087 | 99.99th=[ 412] 00:39:47.087 bw ( KiB/s): min= 4096, max= 4096, per=32.53%, avg=4096.00, stdev= 0.00, samples=1 00:39:47.087 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:47.087 lat (usec) : 250=70.84%, 500=25.05% 00:39:47.087 lat (msec) : 50=4.11% 00:39:47.087 cpu : usr=0.58%, sys=0.58%, ctx=536, majf=0, minf=1 00:39:47.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:47.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:47.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:47.087 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:47.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:47.087 job2: (groupid=0, jobs=1): err= 0: pid=2526630: Mon Oct 28 05:15:37 2024 00:39:47.087 read: IOPS=20, BW=82.5KiB/s (84.5kB/s)(84.0KiB/1018msec) 00:39:47.087 slat (nsec): min=8839, max=14400, avg=13608.24, stdev=1106.24 00:39:47.087 clat (usec): min=40522, max=41084, avg=40954.34, stdev=110.37 00:39:47.087 lat (usec): min=40531, max=41098, avg=40967.95, stdev=111.34 00:39:47.087 clat percentiles (usec): 00:39:47.087 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:47.087 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:39:47.087 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:47.087 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:47.087 | 99.99th=[41157] 00:39:47.087 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:39:47.087 slat (nsec): min=8148, max=51954, avg=12626.95, stdev=5945.00 00:39:47.087 clat (usec): min=207, max=507, avg=290.62, stdev=40.70 00:39:47.087 lat (usec): min=219, max=518, avg=303.24, stdev=41.08 00:39:47.087 clat percentiles (usec): 00:39:47.087 | 1.00th=[ 223], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 258], 00:39:47.087 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:39:47.087 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 363], 00:39:47.087 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 506], 99.95th=[ 506], 00:39:47.087 | 99.99th=[ 506] 00:39:47.087 bw ( KiB/s): min= 4096, max= 4096, per=32.53%, avg=4096.00, stdev= 0.00, samples=1 00:39:47.087 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:47.087 lat (usec) : 250=13.88%, 500=81.99%, 750=0.19% 00:39:47.088 lat (msec) : 50=3.94% 00:39:47.088 cpu : usr=0.29%, sys=0.69%, ctx=534, majf=0, minf=1 00:39:47.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:47.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:47.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:47.088 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:47.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:47.088 job3: (groupid=0, jobs=1): err= 0: pid=2526631: Mon Oct 28 05:15:37 2024 00:39:47.088 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:39:47.088 slat (nsec): min=6653, max=49765, avg=16078.51, stdev=5046.11 00:39:47.088 clat (usec): min=281, max=645, avg=320.53, stdev=33.92 00:39:47.088 lat (usec): min=288, max=661, avg=336.61, stdev=35.55 00:39:47.088 clat percentiles (usec): 00:39:47.088 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:39:47.088 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:39:47.088 | 70.00th=[ 322], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 347], 00:39:47.088 | 99.00th=[ 502], 99.50th=[ 562], 99.90th=[ 619], 99.95th=[ 644], 00:39:47.088 | 99.99th=[ 644] 00:39:47.088 write: IOPS=1711, BW=6845KiB/s (7009kB/s)(6852KiB/1001msec); 0 zone resets 00:39:47.088 slat (nsec): min=8697, max=73744, avg=20555.29, stdev=7376.38 00:39:47.088 clat (usec): min=183, max=1436, avg=251.78, stdev=48.29 00:39:47.088 lat (usec): min=194, max=1477, avg=272.33, stdev=48.30 00:39:47.088 clat percentiles (usec): 00:39:47.088 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 221], 20.00th=[ 231], 00:39:47.088 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 245], 00:39:47.088 | 70.00th=[ 251], 80.00th=[ 269], 90.00th=[ 306], 95.00th=[ 326], 00:39:47.088 | 99.00th=[ 404], 99.50th=[ 453], 99.90th=[ 529], 99.95th=[ 1434], 00:39:47.088 | 99.99th=[ 1434] 00:39:47.088 bw ( KiB/s): min= 8192, max= 8192, per=65.05%, avg=8192.00, stdev= 0.00, samples=1 00:39:47.088 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:47.088 lat (usec) : 250=36.60%, 500=62.82%, 750=0.55% 00:39:47.088 lat (msec) : 2=0.03% 00:39:47.088 cpu : usr=3.70%, sys=8.60%, ctx=3250, majf=0, minf=1 00:39:47.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:47.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:47.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:47.088 issued rwts: total=1536,1713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:47.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:47.088 00:39:47.088 Run status group 0 (all jobs): 00:39:47.088 READ: bw=6713KiB/s (6874kB/s), 82.5KiB/s-6138KiB/s (84.5kB/s-6285kB/s), io=6928KiB (7094kB), run=1001-1032msec 00:39:47.088 WRITE: bw=12.3MiB/s (12.9MB/s), 1984KiB/s-6845KiB/s (2032kB/s-7009kB/s), io=12.7MiB (13.3MB), run=1001-1032msec 00:39:47.088 00:39:47.088 Disk stats (read/write): 00:39:47.088 nvme0n1: ios=67/512, merge=0/0, ticks=738/120, in_queue=858, util=87.07% 00:39:47.088 nvme0n2: ios=66/512, merge=0/0, ticks=1338/117, in_queue=1455, util=89.52% 00:39:47.088 nvme0n3: ios=73/512, merge=0/0, ticks=747/139, in_queue=886, util=95.50% 00:39:47.088 nvme0n4: ios=1205/1536, merge=0/0, ticks=1269/373, in_queue=1642, util=94.41% 00:39:47.088 05:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:47.088 [global] 00:39:47.088 thread=1 00:39:47.088 invalidate=1 00:39:47.088 rw=randwrite 00:39:47.088 time_based=1 00:39:47.088 runtime=1 00:39:47.088 ioengine=libaio 00:39:47.088 direct=1 00:39:47.088 bs=4096 00:39:47.088 iodepth=1 00:39:47.088 norandommap=0 00:39:47.088 numjobs=1 00:39:47.088 00:39:47.088 verify_dump=1 00:39:47.088 verify_backlog=512 00:39:47.088 verify_state_save=0 00:39:47.088 do_verify=1 00:39:47.088 verify=crc32c-intel 00:39:47.088 [job0] 00:39:47.088 filename=/dev/nvme0n1 00:39:47.088 [job1] 00:39:47.088 filename=/dev/nvme0n2 00:39:47.088 [job2] 00:39:47.088 filename=/dev/nvme0n3 00:39:47.088 [job3] 00:39:47.088 filename=/dev/nvme0n4 00:39:47.088 Could not set queue depth (nvme0n1) 00:39:47.088 Could not set queue depth (nvme0n2) 00:39:47.088 Could not set queue depth (nvme0n3) 00:39:47.088 Could not set queue depth (nvme0n4) 00:39:47.345 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.345 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.345 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.345 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.345 fio-3.35 00:39:47.345 Starting 4 threads 00:39:48.717 00:39:48.717 job0: (groupid=0, jobs=1): err= 0: pid=2526861: Mon Oct 28 05:15:39 2024 00:39:48.717 read: IOPS=610, BW=2444KiB/s (2502kB/s)(2468KiB/1010msec) 00:39:48.717 slat (nsec): min=5958, max=35176, avg=7420.69, stdev=3395.05 00:39:48.717 clat (usec): min=297, max=42350, avg=1154.15, stdev=5690.57 00:39:48.717 lat (usec): min=303, max=42359, avg=1161.57, stdev=5692.17 00:39:48.717 clat percentiles (usec): 00:39:48.717 | 1.00th=[ 306], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 326], 00:39:48.717 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 355], 00:39:48.718 | 70.00th=[ 363], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 474], 00:39:48.718 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:48.718 | 99.99th=[42206] 00:39:48.718 write: IOPS=1013, BW=4055KiB/s (4153kB/s)(4096KiB/1010msec); 0 zone resets 00:39:48.718 slat (nsec): min=7662, max=37978, avg=11250.01, stdev=4848.46 00:39:48.718 
clat (usec): min=182, max=524, avg=270.46, stdev=84.01 00:39:48.718 lat (usec): min=190, max=535, avg=281.71, stdev=86.18 00:39:48.718 clat percentiles (usec): 00:39:48.718 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:39:48.718 | 30.00th=[ 208], 40.00th=[ 221], 50.00th=[ 235], 60.00th=[ 253], 00:39:48.718 | 70.00th=[ 289], 80.00th=[ 363], 90.00th=[ 408], 95.00th=[ 441], 00:39:48.718 | 99.00th=[ 482], 99.50th=[ 490], 99.90th=[ 519], 99.95th=[ 523], 00:39:48.718 | 99.99th=[ 523] 00:39:48.718 bw ( KiB/s): min= 2216, max= 5964, per=34.68%, avg=4090.00, stdev=2650.24, samples=2 00:39:48.718 iops : min= 554, max= 1491, avg=1022.50, stdev=662.56, samples=2 00:39:48.718 lat (usec) : 250=36.75%, 500=61.43%, 750=1.10% 00:39:48.718 lat (msec) : 50=0.73% 00:39:48.718 cpu : usr=1.49%, sys=1.68%, ctx=1642, majf=0, minf=1 00:39:48.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.718 issued rwts: total=617,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.718 job1: (groupid=0, jobs=1): err= 0: pid=2526862: Mon Oct 28 05:15:39 2024 00:39:48.718 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:39:48.718 slat (nsec): min=6075, max=32149, avg=16122.64, stdev=6915.12 00:39:48.718 clat (usec): min=11118, max=42070, avg=40507.14, stdev=6568.37 00:39:48.718 lat (usec): min=11131, max=42083, avg=40523.26, stdev=6568.85 00:39:48.718 clat percentiles (usec): 00:39:48.718 | 1.00th=[11076], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:39:48.718 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:39:48.718 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:48.718 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:48.718 | 99.99th=[42206] 00:39:48.718 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:39:48.718 slat (nsec): min=5860, max=30063, avg=8796.11, stdev=3858.14 00:39:48.718 clat (usec): min=187, max=483, avg=252.97, stdev=52.67 00:39:48.718 lat (usec): min=194, max=490, avg=261.77, stdev=52.80 00:39:48.718 clat percentiles (usec): 00:39:48.718 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:39:48.718 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243], 00:39:48.718 | 70.00th=[ 258], 80.00th=[ 293], 90.00th=[ 330], 95.00th=[ 379], 00:39:48.718 | 99.00th=[ 400], 99.50th=[ 465], 99.90th=[ 486], 99.95th=[ 486], 00:39:48.718 | 99.99th=[ 486] 00:39:48.718 bw ( KiB/s): min= 4087, max= 4087, per=34.66%, avg=4087.00, stdev= 0.00, samples=1 00:39:48.718 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:39:48.718 lat (usec) : 250=64.23%, 500=31.65% 00:39:48.718 lat (msec) : 20=0.19%, 50=3.93% 00:39:48.718 cpu : usr=0.39%, sys=0.29%, ctx=534, majf=0, minf=1 00:39:48.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.718 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.718 job2: (groupid=0, jobs=1): err= 0: pid=2526863: Mon Oct 28 05:15:39 2024 00:39:48.718 read: 
IOPS=499, BW=1996KiB/s (2044kB/s)(2080KiB/1042msec) 00:39:48.718 slat (nsec): min=4759, max=42909, avg=12524.84, stdev=5144.99 00:39:48.718 clat (usec): min=301, max=42361, avg=1429.06, stdev=6505.74 00:39:48.718 lat (usec): min=307, max=42375, avg=1441.59, stdev=6507.24 00:39:48.718 clat percentiles (usec): 00:39:48.718 | 1.00th=[ 310], 5.00th=[ 326], 10.00th=[ 338], 20.00th=[ 355], 00:39:48.718 | 30.00th=[ 379], 40.00th=[ 383], 50.00th=[ 388], 60.00th=[ 392], 00:39:48.718 | 70.00th=[ 404], 80.00th=[ 420], 90.00th=[ 449], 95.00th=[ 474], 00:39:48.718 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:48.718 | 99.99th=[42206] 00:39:48.718 write: IOPS=982, BW=3931KiB/s (4025kB/s)(4096KiB/1042msec); 0 zone resets 00:39:48.718 slat (nsec): min=6667, max=33628, avg=10908.42, stdev=4987.18 00:39:48.718 clat (usec): min=188, max=569, avg=269.86, stdev=85.93 00:39:48.718 lat (usec): min=196, max=586, avg=280.77, stdev=89.00 00:39:48.718 clat percentiles (usec): 00:39:48.718 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 206], 00:39:48.718 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 237], 00:39:48.718 | 70.00th=[ 306], 80.00th=[ 371], 90.00th=[ 400], 95.00th=[ 449], 00:39:48.718 | 99.00th=[ 494], 99.50th=[ 510], 99.90th=[ 519], 99.95th=[ 570], 00:39:48.718 | 99.99th=[ 570] 00:39:48.718 bw ( KiB/s): min= 4087, max= 4096, per=34.69%, avg=4091.50, stdev= 6.36, samples=2 00:39:48.718 iops : min= 1021, max= 1024, avg=1022.50, stdev= 2.12, samples=2 00:39:48.718 lat (usec) : 250=42.49%, 500=55.83%, 750=0.84% 00:39:48.718 lat (msec) : 50=0.84% 00:39:48.718 cpu : usr=0.96%, sys=1.54%, ctx=1545, majf=0, minf=1 00:39:48.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.718 issued rwts: total=520,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.718 job3: (groupid=0, jobs=1): err= 0: pid=2526864: Mon Oct 28 05:15:39 2024 00:39:48.718 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:39:48.718 slat (nsec): min=7110, max=33325, avg=17879.36, stdev=7713.64 00:39:48.718 clat (usec): min=40938, max=41390, avg=40997.30, stdev=90.94 00:39:48.718 lat (usec): min=40956, max=41397, avg=41015.18, stdev=87.96 00:39:48.718 clat percentiles (usec): 00:39:48.718 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:48.718 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:48.718 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:48.718 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:48.718 | 99.99th=[41157] 00:39:48.718 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:39:48.718 slat (nsec): min=6168, max=31475, avg=8310.92, stdev=3696.86 00:39:48.718 clat (usec): min=191, max=482, avg=236.54, stdev=30.54 00:39:48.718 lat (usec): min=198, max=493, avg=244.85, stdev=31.27 00:39:48.718 clat percentiles (usec): 00:39:48.718 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:39:48.718 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:39:48.718 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 281], 00:39:48.718 | 99.00th=[ 375], 99.50th=[ 420], 99.90th=[ 482], 99.95th=[ 482], 00:39:48.718 | 99.99th=[ 482] 00:39:48.718 bw ( 
KiB/s): min= 4087, max= 4087, per=34.66%, avg=4087.00, stdev= 0.00, samples=1 00:39:48.718 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:39:48.718 lat (usec) : 250=78.28%, 500=17.60% 00:39:48.718 lat (msec) : 50=4.12% 00:39:48.718 cpu : usr=0.10%, sys=0.49%, ctx=534, majf=0, minf=1 00:39:48.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.718 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:48.718 00:39:48.718 Run status group 0 (all jobs): 00:39:48.718 READ: bw=4534KiB/s (4642kB/s), 85.5KiB/s-2444KiB/s (87.6kB/s-2502kB/s), io=4724KiB (4837kB), run=1010-1042msec 00:39:48.718 WRITE: bw=11.5MiB/s (12.1MB/s), 1990KiB/s-4055KiB/s (2038kB/s-4153kB/s), io=12.0MiB (12.6MB), run=1010-1042msec 00:39:48.718 00:39:48.718 Disk stats (read/write): 00:39:48.718 nvme0n1: ios=639/1024, merge=0/0, ticks=1521/265, in_queue=1786, util=98.30% 00:39:48.718 nvme0n2: ios=17/512, merge=0/0, ticks=683/130, in_queue=813, util=86.60% 00:39:48.718 nvme0n3: ios=539/1024, merge=0/0, ticks=1514/267, in_queue=1781, util=98.23% 00:39:48.718 nvme0n4: ios=17/512, merge=0/0, ticks=698/120, in_queue=818, util=89.70% 00:39:48.718 05:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:48.718 [global] 00:39:48.718 thread=1 00:39:48.718 invalidate=1 00:39:48.718 rw=write 00:39:48.718 time_based=1 00:39:48.718 runtime=1 00:39:48.718 ioengine=libaio 00:39:48.718 direct=1 00:39:48.718 bs=4096 00:39:48.718 iodepth=128 00:39:48.718 norandommap=0 00:39:48.718 numjobs=1 00:39:48.718 00:39:48.718 verify_dump=1 00:39:48.718 verify_backlog=512 00:39:48.718 verify_state_save=0 00:39:48.718 do_verify=1 00:39:48.718 verify=crc32c-intel 00:39:48.718 [job0] 00:39:48.718 filename=/dev/nvme0n1 00:39:48.718 [job1] 00:39:48.718 filename=/dev/nvme0n2 00:39:48.718 [job2] 00:39:48.718 filename=/dev/nvme0n3 00:39:48.718 [job3] 00:39:48.718 filename=/dev/nvme0n4 00:39:48.718 Could not set queue depth (nvme0n1) 00:39:48.718 Could not set queue depth (nvme0n2) 00:39:48.718 Could not set queue depth (nvme0n3) 00:39:48.718 Could not set queue depth (nvme0n4) 00:39:48.977 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:48.977 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:48.977 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:48.977 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:48.977 fio-3.35 00:39:48.977 Starting 4 threads 00:39:50.351 00:39:50.351 job0: (groupid=0, jobs=1): err= 0: pid=2527080: Mon Oct 28 05:15:40 2024 00:39:50.351 read: IOPS=2860, BW=11.2MiB/s (11.7MB/s)(11.3MiB/1008msec) 00:39:50.351 slat (usec): min=2, max=39858, avg=174.22, stdev=1237.65 00:39:50.351 clat (usec): min=3439, max=59355, avg=20630.49, stdev=8472.77 00:39:50.351 lat (usec): min=8903, max=59400, avg=20804.71, stdev=8522.80 00:39:50.351 clat percentiles (usec): 00:39:50.351 | 1.00th=[ 9896], 5.00th=[11469], 10.00th=[12125], 20.00th=[13960], 00:39:50.351 | 
30.00th=[15401], 40.00th=[16909], 50.00th=[19006], 60.00th=[21890], 00:39:50.351 | 70.00th=[23462], 80.00th=[25560], 90.00th=[30278], 95.00th=[33424], 00:39:50.351 | 99.00th=[57934], 99.50th=[57934], 99.90th=[58459], 99.95th=[58459], 00:39:50.351 | 99.99th=[59507] 00:39:50.351 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:39:50.351 slat (usec): min=3, max=15162, avg=155.73, stdev=905.98 00:39:50.351 clat (usec): min=8397, max=74836, avg=22157.26, stdev=13500.80 00:39:50.351 lat (usec): min=8405, max=74844, avg=22312.98, stdev=13587.37 00:39:50.351 clat percentiles (usec): 00:39:50.351 | 1.00th=[10028], 5.00th=[10290], 10.00th=[10552], 20.00th=[12256], 00:39:50.351 | 30.00th=[14091], 40.00th=[15139], 50.00th=[17957], 60.00th=[21365], 00:39:50.351 | 70.00th=[24511], 80.00th=[29492], 90.00th=[32637], 95.00th=[57410], 00:39:50.351 | 99.00th=[70779], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:39:50.351 | 99.99th=[74974] 00:39:50.351 bw ( KiB/s): min=12232, max=12344, per=20.13%, avg=12288.00, stdev=79.20, samples=2 00:39:50.351 iops : min= 3058, max= 3086, avg=3072.00, stdev=19.80, samples=2 00:39:50.351 lat (msec) : 4=0.02%, 10=1.02%, 20=53.40%, 50=40.79%, 100=4.77% 00:39:50.351 cpu : usr=2.78%, sys=4.97%, ctx=290, majf=0, minf=2 00:39:50.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:39:50.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:50.351 issued rwts: total=2883,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:50.351 job1: (groupid=0, jobs=1): err= 0: pid=2527081: Mon Oct 28 05:15:40 2024 00:39:50.351 read: IOPS=5041, BW=19.7MiB/s (20.7MB/s)(19.9MiB/1008msec) 00:39:50.351 slat (usec): min=2, max=20618, avg=100.61, stdev=827.67 00:39:50.351 clat (usec): min=3246, max=74556, avg=12820.86, stdev=8748.48 00:39:50.351 lat (usec): min=3251, max=74560, avg=12921.47, stdev=8801.29 00:39:50.351 clat percentiles (usec): 00:39:50.351 | 1.00th=[ 5604], 5.00th=[ 7373], 10.00th=[ 8225], 20.00th=[ 8848], 00:39:50.351 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[11076], 00:39:50.351 | 70.00th=[12780], 80.00th=[14877], 90.00th=[17957], 95.00th=[20317], 00:39:50.351 | 99.00th=[63177], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:39:50.351 | 99.99th=[74974] 00:39:50.351 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:39:50.351 slat (usec): min=3, max=18216, avg=87.47, stdev=718.16 00:39:50.351 clat (usec): min=2488, max=36374, avg=11788.13, stdev=5077.35 00:39:50.351 lat (usec): min=2498, max=36388, avg=11875.60, stdev=5105.58 00:39:50.351 clat percentiles (usec): 00:39:50.351 | 1.00th=[ 4490], 5.00th=[ 6718], 10.00th=[ 7504], 20.00th=[ 8356], 00:39:50.351 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10814], 60.00th=[11338], 00:39:50.351 | 70.00th=[11863], 80.00th=[14222], 90.00th=[15664], 95.00th=[21103], 00:39:50.351 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:39:50.351 | 99.99th=[36439] 00:39:50.351 bw ( KiB/s): min=17120, max=23840, per=33.55%, avg=20480.00, stdev=4751.76, samples=2 00:39:50.351 iops : min= 4280, max= 5960, avg=5120.00, stdev=1187.94, samples=2 00:39:50.351 lat (msec) : 4=0.52%, 10=39.87%, 20=52.76%, 50=5.69%, 100=1.15% 00:39:50.351 cpu : usr=3.57%, sys=7.05%, ctx=367, majf=0, minf=1 00:39:50.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.3%, >=64=99.4% 00:39:50.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:50.351 issued rwts: total=5082,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:50.351 job2: (groupid=0, jobs=1): err= 0: pid=2527082: Mon Oct 28 05:15:40 2024 00:39:50.351 read: IOPS=4451, BW=17.4MiB/s (18.2MB/s)(17.5MiB/1005msec) 00:39:50.351 slat (usec): min=2, max=14126, avg=107.87, stdev=780.10 00:39:50.351 clat (usec): min=2557, max=30687, avg=13459.82, stdev=3523.48 00:39:50.351 lat (usec): min=3685, max=30702, avg=13567.70, stdev=3569.44 00:39:50.351 clat percentiles (usec): 00:39:50.351 | 1.00th=[ 6259], 5.00th=[ 8586], 10.00th=[11207], 20.00th=[11338], 00:39:50.351 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12518], 60.00th=[13042], 00:39:50.351 | 70.00th=[13829], 80.00th=[15926], 90.00th=[18744], 95.00th=[20841], 00:39:50.351 | 99.00th=[23725], 99.50th=[24511], 99.90th=[26870], 99.95th=[26870], 00:39:50.351 | 99.99th=[30802] 00:39:50.351 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:39:50.351 slat (usec): min=3, max=31344, avg=101.67, stdev=780.26 00:39:50.351 clat (usec): min=731, max=54338, avg=14542.56, stdev=7035.04 00:39:50.351 lat (usec): min=747, max=54372, avg=14644.23, stdev=7103.40 00:39:50.351 clat percentiles (usec): 00:39:50.351 | 1.00th=[ 3261], 5.00th=[ 6587], 10.00th=[ 8848], 20.00th=[10290], 00:39:50.351 | 30.00th=[11731], 40.00th=[12387], 50.00th=[13042], 60.00th=[13304], 00:39:50.351 | 70.00th=[13435], 80.00th=[17695], 90.00th=[23725], 95.00th=[32375], 00:39:50.351 | 99.00th=[37487], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:39:50.351 | 99.99th=[54264] 00:39:50.351 bw ( KiB/s): min=16384, max=20480, per=30.19%, avg=18432.00, stdev=2896.31, samples=2 00:39:50.351 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:39:50.351 lat (usec) : 750=0.02%, 1000=0.02% 00:39:50.351 lat (msec) : 2=0.09%, 4=0.97%, 10=12.28%, 20=75.49%, 50=11.12% 00:39:50.351 lat (msec) : 100=0.01% 00:39:50.351 cpu : usr=4.58%, sys=5.28%, ctx=469, majf=0, minf=1 00:39:50.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:50.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:50.351 issued rwts: total=4474,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:50.351 job3: (groupid=0, jobs=1): err= 0: pid=2527083: Mon Oct 28 05:15:40 2024 00:39:50.351 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:39:50.351 slat (usec): min=2, max=20582, avg=220.36, stdev=1365.29 00:39:50.351 clat (usec): min=5097, max=60756, avg=28729.70, stdev=13003.39 00:39:50.351 lat (usec): min=5105, max=60771, avg=28950.06, stdev=13117.24 00:39:50.351 clat percentiles (usec): 00:39:50.351 | 1.00th=[ 5211], 5.00th=[10552], 10.00th=[11994], 20.00th=[16581], 00:39:50.351 | 30.00th=[18744], 40.00th=[23725], 50.00th=[28443], 60.00th=[31327], 00:39:50.351 | 70.00th=[36439], 80.00th=[41157], 90.00th=[46400], 95.00th=[50070], 00:39:50.351 | 99.00th=[58459], 99.50th=[58459], 99.90th=[58459], 99.95th=[58459], 00:39:50.351 | 99.99th=[60556] 00:39:50.351 write: IOPS=2562, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1008msec); 0 zone resets 00:39:50.351 slat (usec): min=3, max=14343, 
avg=161.59, stdev=1005.32 00:39:50.351 clat (usec): min=3637, max=54108, avg=20977.53, stdev=8850.65 00:39:50.351 lat (usec): min=6521, max=54113, avg=21139.11, stdev=8895.21 00:39:50.351 clat percentiles (usec): 00:39:50.351 | 1.00th=[ 6652], 5.00th=[ 9503], 10.00th=[11600], 20.00th=[12387], 00:39:50.351 | 30.00th=[16188], 40.00th=[17171], 50.00th=[19006], 60.00th=[20841], 00:39:50.351 | 70.00th=[26084], 80.00th=[29492], 90.00th=[31589], 95.00th=[36439], 00:39:50.351 | 99.00th=[47973], 99.50th=[51119], 99.90th=[53740], 99.95th=[54264], 00:39:50.351 | 99.99th=[54264] 00:39:50.351 bw ( KiB/s): min= 8192, max=12288, per=16.77%, avg=10240.00, stdev=2896.31, samples=2 00:39:50.351 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:39:50.351 lat (msec) : 4=0.02%, 10=4.92%, 20=38.25%, 50=53.67%, 100=3.15% 00:39:50.351 cpu : usr=2.18%, sys=3.38%, ctx=197, majf=0, minf=1 00:39:50.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:39:50.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:50.351 issued rwts: total=2560,2583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:50.351 00:39:50.351 Run status group 0 (all jobs): 00:39:50.351 READ: bw=58.1MiB/s (60.9MB/s), 9.92MiB/s-19.7MiB/s (10.4MB/s-20.7MB/s), io=58.6MiB (61.4MB), run=1005-1008msec 00:39:50.351 WRITE: bw=59.6MiB/s (62.5MB/s), 10.0MiB/s-19.8MiB/s (10.5MB/s-20.8MB/s), io=60.1MiB (63.0MB), run=1005-1008msec 00:39:50.351 00:39:50.351 Disk stats (read/write): 00:39:50.351 nvme0n1: ios=2610/2831, merge=0/0, ticks=26455/25122, in_queue=51577, util=86.47% 00:39:50.351 nvme0n2: ios=4145/4177, merge=0/0, ticks=48617/45997, in_queue=94614, util=88.91% 00:39:50.351 nvme0n3: ios=3641/3919, merge=0/0, ticks=46430/54980, in_queue=101410, util=95.07% 00:39:50.351 nvme0n4: ios=2111/2201, merge=0/0, ticks=27426/24359, in_queue=51785, util=95.34% 00:39:50.351 05:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:50.351 [global] 00:39:50.351 thread=1 00:39:50.351 invalidate=1 00:39:50.351 rw=randwrite 00:39:50.351 time_based=1 00:39:50.351 runtime=1 00:39:50.351 ioengine=libaio 00:39:50.351 direct=1 00:39:50.351 bs=4096 00:39:50.351 iodepth=128 00:39:50.351 norandommap=0 00:39:50.351 numjobs=1 00:39:50.351 00:39:50.351 verify_dump=1 00:39:50.351 verify_backlog=512 00:39:50.351 verify_state_save=0 00:39:50.351 do_verify=1 00:39:50.351 verify=crc32c-intel 00:39:50.351 [job0] 00:39:50.352 filename=/dev/nvme0n1 00:39:50.352 [job1] 00:39:50.352 filename=/dev/nvme0n2 00:39:50.352 [job2] 00:39:50.352 filename=/dev/nvme0n3 00:39:50.352 [job3] 00:39:50.352 filename=/dev/nvme0n4 00:39:50.352 Could not set queue depth (nvme0n1) 00:39:50.352 Could not set queue depth (nvme0n2) 00:39:50.352 Could not set queue depth (nvme0n3) 00:39:50.352 Could not set queue depth (nvme0n4) 00:39:50.352 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:50.352 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:50.352 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:50.352 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:50.352 fio-3.35 00:39:50.352 Starting 4 threads 00:39:51.728 00:39:51.728 job0: (groupid=0, jobs=1): err= 0: pid=2527311: Mon Oct 28 05:15:41 2024 00:39:51.728 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:39:51.728 slat (usec): min=2, max=24400, avg=173.89, stdev=1158.83 00:39:51.728 clat (usec): min=8355, max=70190, avg=22940.92, stdev=14208.95 00:39:51.728 lat (usec): min=8366, max=70197, avg=23114.80, stdev=14296.00 00:39:51.728 clat percentiles (usec): 00:39:51.728 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11338], 00:39:51.728 | 30.00th=[12125], 40.00th=[12911], 50.00th=[18482], 60.00th=[22676], 00:39:51.728 | 70.00th=[28443], 80.00th=[33817], 90.00th=[45876], 95.00th=[54264], 00:39:51.728 | 99.00th=[62129], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:39:51.728 | 99.99th=[69731] 00:39:51.728 write: IOPS=3508, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1003msec); 0 zone resets 00:39:51.728 slat (usec): min=3, max=23961, avg=120.20, stdev=807.91 00:39:51.728 clat (usec): min=1166, max=52538, avg=16006.65, stdev=8712.21 00:39:51.728 lat (usec): min=1184, max=52554, avg=16126.85, stdev=8762.38 00:39:51.728 clat percentiles (usec): 00:39:51.728 | 1.00th=[ 4359], 5.00th=[ 7701], 10.00th=[ 8225], 20.00th=[ 9372], 00:39:51.728 | 30.00th=[10945], 40.00th=[11731], 50.00th=[13042], 60.00th=[15008], 00:39:51.728 | 70.00th=[17433], 80.00th=[20841], 90.00th=[28967], 95.00th=[36439], 00:39:51.728 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[45351], 00:39:51.728 | 99.99th=[52691] 00:39:51.728 bw ( KiB/s): min=12288, max=14848, per=23.48%, avg=13568.00, stdev=1810.19, samples=2 00:39:51.728 iops : min= 3072, max= 3712, avg=3392.00, stdev=452.55, samples=2 00:39:51.728 lat (msec) : 2=0.03%, 4=0.24%, 10=17.04%, 20=50.05%, 50=29.15% 00:39:51.728 lat (msec) : 100=3.49% 00:39:51.728 cpu : usr=2.79%, sys=5.39%, ctx=320, majf=0, minf=1 00:39:51.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:39:51.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:51.728 issued rwts: total=3072,3519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:51.728 job1: (groupid=0, jobs=1): err= 0: pid=2527312: Mon Oct 28 05:15:41 2024 00:39:51.728 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:39:51.728 slat (usec): min=2, max=9305, avg=115.63, stdev=625.72 00:39:51.728 clat (usec): min=3664, max=34667, avg=14855.33, stdev=5581.99 00:39:51.728 lat (usec): min=3668, max=34700, avg=14970.97, stdev=5608.89 00:39:51.728 clat percentiles (usec): 00:39:51.728 | 1.00th=[ 6128], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[11207], 00:39:51.728 | 30.00th=[11994], 40.00th=[12387], 50.00th=[13042], 60.00th=[13698], 00:39:51.728 | 70.00th=[15139], 80.00th=[17695], 90.00th=[23725], 95.00th=[27919], 00:39:51.728 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:39:51.728 | 99.99th=[34866] 00:39:51.728 write: IOPS=3090, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1002msec); 0 zone resets 00:39:51.728 slat (usec): min=3, max=28393, avg=199.21, stdev=1433.05 00:39:51.728 clat (usec): min=1568, max=79742, avg=25556.00, stdev=17998.68 00:39:51.728 lat (usec): min=1579, max=80477, avg=25755.21, stdev=18082.43 00:39:51.728 clat percentiles (usec): 00:39:51.728 | 1.00th=[ 6980], 5.00th=[ 
9503], 10.00th=[10159], 20.00th=[11994], 00:39:51.728 | 30.00th=[13435], 40.00th=[15008], 50.00th=[17171], 60.00th=[22676], 00:39:51.728 | 70.00th=[27919], 80.00th=[36963], 90.00th=[56361], 95.00th=[66847], 00:39:51.728 | 99.00th=[80217], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:39:51.728 | 99.99th=[80217] 00:39:51.728 bw ( KiB/s): min=10176, max=14400, per=21.26%, avg=12288.00, stdev=2986.82, samples=2 00:39:51.728 iops : min= 2544, max= 3600, avg=3072.00, stdev=746.70, samples=2 00:39:51.728 lat (msec) : 2=0.28%, 4=0.31%, 10=7.83%, 20=60.45%, 50=24.01% 00:39:51.728 lat (msec) : 100=7.13% 00:39:51.728 cpu : usr=3.40%, sys=5.19%, ctx=324, majf=0, minf=1 00:39:51.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:39:51.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:51.729 issued rwts: total=3072,3097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:51.729 job2: (groupid=0, jobs=1): err= 0: pid=2527313: Mon Oct 28 05:15:41 2024 00:39:51.729 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:39:51.729 slat (usec): min=3, max=17622, avg=145.62, stdev=925.10 00:39:51.729 clat (usec): min=4997, max=62078, avg=17487.74, stdev=7067.24 00:39:51.729 lat (usec): min=5013, max=62096, avg=17633.35, stdev=7140.04 00:39:51.729 clat percentiles (usec): 00:39:51.729 | 1.00th=[ 9503], 5.00th=[11469], 10.00th=[13435], 20.00th=[14091], 00:39:51.729 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[16188], 00:39:51.729 | 70.00th=[16712], 80.00th=[19268], 90.00th=[22938], 95.00th=[31851], 00:39:51.729 | 99.00th=[51643], 99.50th=[56886], 99.90th=[62129], 99.95th=[62129], 00:39:51.729 | 99.99th=[62129] 00:39:51.729 write: IOPS=3345, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1011msec); 0 zone resets 00:39:51.729 slat (usec): min=3, max=12272, avg=149.70, stdev=686.42 00:39:51.729 clat (usec): min=1546, max=62082, avg=22007.69, stdev=12127.23 00:39:51.729 lat (usec): min=1556, max=62103, avg=22157.39, stdev=12205.16 00:39:51.729 clat percentiles (usec): 00:39:51.729 | 1.00th=[ 6063], 5.00th=[ 8848], 10.00th=[11863], 20.00th=[13304], 00:39:51.729 | 30.00th=[13435], 40.00th=[15139], 50.00th=[17171], 60.00th=[19792], 00:39:51.729 | 70.00th=[24249], 80.00th=[33817], 90.00th=[43779], 95.00th=[45351], 00:39:51.729 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53740], 99.95th=[62129], 00:39:51.729 | 99.99th=[62129] 00:39:51.729 bw ( KiB/s): min= 9656, max=16384, per=22.53%, avg=13020.00, stdev=4757.41, samples=2 00:39:51.729 iops : min= 2414, max= 4096, avg=3255.00, stdev=1189.35, samples=2 00:39:51.729 lat (msec) : 2=0.05%, 4=0.09%, 10=3.59%, 20=68.10%, 50=26.91% 00:39:51.729 lat (msec) : 100=1.26% 00:39:51.729 cpu : usr=4.75%, sys=9.31%, ctx=352, majf=0, minf=1 00:39:51.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:39:51.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:51.729 issued rwts: total=3072,3382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:51.729 job3: (groupid=0, jobs=1): err= 0: pid=2527317: Mon Oct 28 05:15:41 2024 00:39:51.729 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1003msec) 00:39:51.729 slat (usec): min=2, max=15322, avg=106.00, stdev=683.67 
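For readability: the randwrite parameters echoed above (from scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v) correspond to a job file like the sketch below. This is a hand-written illustration, not the wrapper script's own code; the nvmf-randwrite.fio filename is made up here, and it assumes /dev/nvme0n1 through /dev/nvme0n4 are the namespaces exposed by the connected NVMe-oF/TCP subsystem, as in this run.

# Standalone sketch of the job file implied by the wrapper arguments echoed above.
cat > nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1

verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf-randwrite.fio    # prints per-job slat/clat percentiles like the output shown in this log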
00:39:51.729 clat (usec): min=1265, max=57667, avg=14782.78, stdev=4592.54 00:39:51.729 lat (usec): min=1323, max=58124, avg=14888.78, stdev=4616.58 00:39:51.729 clat percentiles (usec): 00:39:51.729 | 1.00th=[ 4015], 5.00th=[ 9503], 10.00th=[10683], 20.00th=[11469], 00:39:51.729 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13829], 60.00th=[15401], 00:39:51.729 | 70.00th=[16909], 80.00th=[17957], 90.00th=[21365], 95.00th=[22414], 00:39:51.729 | 99.00th=[28443], 99.50th=[28705], 99.90th=[57410], 99.95th=[57410], 00:39:51.729 | 99.99th=[57410] 00:39:51.729 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:39:51.729 slat (usec): min=3, max=13880, avg=96.51, stdev=662.69 00:39:51.729 clat (usec): min=1232, max=28952, avg=12921.26, stdev=3664.50 00:39:51.729 lat (usec): min=1239, max=28959, avg=13017.77, stdev=3688.23 00:39:51.729 clat percentiles (usec): 00:39:51.729 | 1.00th=[ 3130], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[10159], 00:39:51.729 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12649], 60.00th=[13173], 00:39:51.729 | 70.00th=[13829], 80.00th=[15533], 90.00th=[17957], 95.00th=[19530], 00:39:51.729 | 99.00th=[23200], 99.50th=[23987], 99.90th=[28967], 99.95th=[28967], 00:39:51.729 | 99.99th=[28967] 00:39:51.729 bw ( KiB/s): min=17320, max=19544, per=31.90%, avg=18432.00, stdev=1572.61, samples=2 00:39:51.729 iops : min= 4330, max= 4886, avg=4608.00, stdev=393.15, samples=2 00:39:51.729 lat (msec) : 2=0.14%, 4=0.86%, 10=11.56%, 20=79.81%, 50=7.55% 00:39:51.729 lat (msec) : 100=0.08% 00:39:51.729 cpu : usr=4.59%, sys=7.39%, ctx=346, majf=0, minf=1 00:39:51.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:51.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:51.729 issued rwts: total=4580,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:51.729 00:39:51.729 Run status group 0 (all jobs): 00:39:51.729 READ: bw=53.3MiB/s (55.9MB/s), 11.9MiB/s-17.8MiB/s (12.4MB/s-18.7MB/s), io=53.9MiB (56.5MB), run=1002-1011msec 00:39:51.729 WRITE: bw=56.4MiB/s (59.2MB/s), 12.1MiB/s-17.9MiB/s (12.7MB/s-18.8MB/s), io=57.1MiB (59.8MB), run=1002-1011msec 00:39:51.729 00:39:51.729 Disk stats (read/write): 00:39:51.729 nvme0n1: ios=2552/2560, merge=0/0, ticks=21672/17898, in_queue=39570, util=98.40% 00:39:51.729 nvme0n2: ios=2608/2567, merge=0/0, ticks=10450/16548, in_queue=26998, util=95.23% 00:39:51.729 nvme0n3: ios=2617/2975, merge=0/0, ticks=42973/60782, in_queue=103755, util=98.33% 00:39:51.729 nvme0n4: ios=3957/4096, merge=0/0, ticks=28125/25021, in_queue=53146, util=98.32% 00:39:51.729 05:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:51.729 05:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2527460 00:39:51.729 05:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:51.729 05:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:51.729 [global] 00:39:51.729 thread=1 00:39:51.729 invalidate=1 00:39:51.729 rw=read 00:39:51.729 time_based=1 00:39:51.729 runtime=10 00:39:51.729 ioengine=libaio 00:39:51.729 direct=1 00:39:51.729 bs=4096 00:39:51.729 iodepth=1 00:39:51.729 
norandommap=1 00:39:51.729 numjobs=1 00:39:51.729 00:39:51.729 [job0] 00:39:51.729 filename=/dev/nvme0n1 00:39:51.729 [job1] 00:39:51.729 filename=/dev/nvme0n2 00:39:51.729 [job2] 00:39:51.729 filename=/dev/nvme0n3 00:39:51.729 [job3] 00:39:51.729 filename=/dev/nvme0n4 00:39:51.729 Could not set queue depth (nvme0n1) 00:39:51.729 Could not set queue depth (nvme0n2) 00:39:51.729 Could not set queue depth (nvme0n3) 00:39:51.729 Could not set queue depth (nvme0n4) 00:39:51.729 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:51.729 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:51.729 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:51.729 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:51.729 fio-3.35 00:39:51.729 Starting 4 threads 00:39:55.145 05:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:55.145 05:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:55.145 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=294912, buflen=4096 00:39:55.145 fio: pid=2527657, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:55.145 05:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:55.145 05:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:55.145 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=38924288, buflen=4096 00:39:55.145 fio: pid=2527656, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:55.403 05:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:55.403 05:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:55.403 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10452992, buflen=4096 00:39:55.403 fio: pid=2527654, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:55.662 05:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:55.662 05:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:55.662 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1519616, buflen=4096 00:39:55.662 fio: pid=2527655, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:55.662 00:39:55.662 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2527654: Mon Oct 28 05:15:46 2024 00:39:55.662 read: IOPS=726, BW=2906KiB/s 
(2976kB/s)(9.97MiB/3513msec) 00:39:55.662 slat (usec): min=4, max=10888, avg=14.58, stdev=215.34 00:39:55.662 clat (usec): min=248, max=41137, avg=1350.09, stdev=6305.63 00:39:55.662 lat (usec): min=255, max=51994, avg=1364.66, stdev=6337.81 00:39:55.662 clat percentiles (usec): 00:39:55.662 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:39:55.662 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 363], 00:39:55.662 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 498], 95.00th=[ 519], 00:39:55.662 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:55.662 | 99.99th=[41157] 00:39:55.662 bw ( KiB/s): min= 96, max=10160, per=25.68%, avg=3386.67, stdev=5096.29, samples=6 00:39:55.662 iops : min= 24, max= 2540, avg=846.67, stdev=1274.07, samples=6 00:39:55.662 lat (usec) : 250=0.08%, 500=90.83%, 750=6.58% 00:39:55.662 lat (msec) : 50=2.47% 00:39:55.662 cpu : usr=0.28%, sys=0.88%, ctx=2554, majf=0, minf=1 00:39:55.662 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:55.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.662 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.662 issued rwts: total=2553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.662 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:55.662 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2527655: Mon Oct 28 05:15:46 2024 00:39:55.662 read: IOPS=98, BW=391KiB/s (401kB/s)(1484KiB/3792msec) 00:39:55.662 slat (usec): min=6, max=13854, avg=84.06, stdev=847.22 00:39:55.662 clat (usec): min=267, max=43026, avg=10069.42, stdev=17402.44 00:39:55.662 lat (usec): min=278, max=48986, avg=10116.37, stdev=17466.53 00:39:55.662 clat percentiles (usec): 00:39:55.662 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 293], 00:39:55.662 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 302], 60.00th=[ 310], 00:39:55.662 | 70.00th=[ 322], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:55.662 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:39:55.662 | 99.99th=[43254] 00:39:55.662 bw ( KiB/s): min= 96, max= 2264, per=3.14%, avg=414.14, stdev=815.85, samples=7 00:39:55.662 iops : min= 24, max= 566, avg=103.43, stdev=204.01, samples=7 00:39:55.662 lat (usec) : 500=74.73%, 750=1.08% 00:39:55.662 lat (msec) : 50=23.92% 00:39:55.662 cpu : usr=0.11%, sys=0.16%, ctx=375, majf=0, minf=1 00:39:55.662 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:55.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.662 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.662 issued rwts: total=372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.662 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:55.662 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2527656: Mon Oct 28 05:15:46 2024 00:39:55.662 read: IOPS=2958, BW=11.6MiB/s (12.1MB/s)(37.1MiB/3212msec) 00:39:55.662 slat (nsec): min=4524, max=50177, avg=9577.81, stdev=4529.20 00:39:55.662 clat (usec): min=252, max=41190, avg=323.53, stdev=723.90 00:39:55.662 lat (usec): min=258, max=41218, avg=333.10, stdev=724.11 00:39:55.662 clat percentiles (usec): 00:39:55.662 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:39:55.662 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 
00:39:55.662 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 359], 00:39:55.662 | 99.00th=[ 449], 99.50th=[ 586], 99.90th=[ 1139], 99.95th=[ 1762], 00:39:55.662 | 99.99th=[41157] 00:39:55.662 bw ( KiB/s): min=10776, max=12968, per=90.63%, avg=11948.00, stdev=981.36, samples=6 00:39:55.662 iops : min= 2694, max= 3242, avg=2987.00, stdev=245.34, samples=6 00:39:55.662 lat (usec) : 500=99.33%, 750=0.28%, 1000=0.25% 00:39:55.662 lat (msec) : 2=0.08%, 10=0.01%, 50=0.03% 00:39:55.662 cpu : usr=1.53%, sys=4.30%, ctx=9504, majf=0, minf=2 00:39:55.662 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:55.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.662 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.662 issued rwts: total=9504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.662 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:55.662 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2527657: Mon Oct 28 05:15:46 2024 00:39:55.662 read: IOPS=24, BW=98.1KiB/s (100kB/s)(288KiB/2935msec) 00:39:55.662 slat (nsec): min=8694, max=34999, avg=18773.93, stdev=8157.27 00:39:55.662 clat (usec): min=475, max=41081, avg=40414.44, stdev=4773.22 00:39:55.662 lat (usec): min=504, max=41090, avg=40433.28, stdev=4772.06 00:39:55.662 clat percentiles (usec): 00:39:55.662 | 1.00th=[ 478], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:55.662 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:55.662 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:55.662 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:55.662 | 99.99th=[41157] 00:39:55.662 bw ( KiB/s): min= 96, max= 104, per=0.75%, avg=99.20, stdev= 4.38, samples=5 00:39:55.662 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:39:55.662 lat (usec) : 500=1.37% 00:39:55.662 lat (msec) : 50=97.26% 00:39:55.662 cpu : usr=0.07%, sys=0.00%, ctx=73, majf=0, minf=2 00:39:55.662 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:55.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.662 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.662 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.662 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:55.662 00:39:55.662 Run status group 0 (all jobs): 00:39:55.662 READ: bw=12.9MiB/s (13.5MB/s), 98.1KiB/s-11.6MiB/s (100kB/s-12.1MB/s), io=48.8MiB (51.2MB), run=2935-3792msec 00:39:55.662 00:39:55.662 Disk stats (read/write): 00:39:55.662 nvme0n1: ios=2549/0, merge=0/0, ticks=3309/0, in_queue=3309, util=96.02% 00:39:55.662 nvme0n2: ios=366/0, merge=0/0, ticks=3561/0, in_queue=3561, util=96.33% 00:39:55.662 nvme0n3: ios=9251/0, merge=0/0, ticks=2901/0, in_queue=2901, util=96.79% 00:39:55.662 nvme0n4: ios=70/0, merge=0/0, ticks=2830/0, in_queue=2830, util=96.75% 00:39:55.920 05:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:55.920 05:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:56.179 05:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev 
in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:56.179 05:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:56.745 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:56.745 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:56.745 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:56.745 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2527460 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:57.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:57.312 nvmf hotplug test: fio failed as expected 00:39:57.312 05:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:57.570 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:57.570 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:57.570 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:57.570 05:15:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:57.570 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:57.570 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:57.570 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:57.570 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:57.570 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:57.570 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:57.570 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:57.570 rmmod nvme_tcp 00:39:57.570 rmmod nvme_fabrics 00:39:57.570 rmmod nvme_keyring 00:39:57.570 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2525586 ']' 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2525586 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2525586 ']' 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2525586 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2525586 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2525586' 00:39:57.571 killing process with pid 2525586 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2525586 00:39:57.571 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2525586 00:39:57.830 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:57.830 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:57.830 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:57.830 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # 
iptr 00:39:57.830 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:39:57.830 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:57.830 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:39:57.830 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:57.830 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:57.830 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:57.830 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:57.830 05:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:00.370 00:40:00.370 real 0m24.180s 00:40:00.370 user 1m7.700s 00:40:00.370 sys 0m10.106s 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:00.370 ************************************ 00:40:00.370 END TEST nvmf_fio_target 00:40:00.370 ************************************ 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:00.370 ************************************ 00:40:00.370 START TEST nvmf_bdevio 00:40:00.370 ************************************ 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:00.370 * Looking for test storage... 
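Before the bdevio test starts, the trace above (nvmftestfini) tears the previous run down: it unloads the kernel NVMe/TCP initiator modules, kills the nvmf_tgt process, strips the SPDK-tagged iptables rules, removes the target network namespace, and clears the test interface address. A condensed bash sketch of that sequence follows, reusing the pid (2525586) and cvl_0_* names from this run; the explicit 'ip netns delete' line is an assumption about what _remove_spdk_ns does, not the script's literal code.

modprobe -v -r nvme-tcp                                 # source of the rmmod nvme_tcp/nvme_fabrics/nvme_keyring messages above
modprobe -v -r nvme-fabrics
kill -0 2525586 && kill 2525586                         # killprocess: confirm the nvmf_tgt pid is alive, then terminate it
wait 2525586 2>/dev/null || true                        # wait only succeeds when nvmf_tgt was launched from this shell, as in the harness
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK_NVMF-commented ACCEPT rules
ip netns delete cvl_0_0_ns_spdk                         # assumed effect of _remove_spdk_ns for the target-side namespace
ip -4 addr flush cvl_0_1                                # flush the initiator-side address, as traced above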
00:40:00.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lcov --version 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:40:00.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.370 --rc genhtml_branch_coverage=1 00:40:00.370 --rc genhtml_function_coverage=1 00:40:00.370 --rc genhtml_legend=1 00:40:00.370 --rc geninfo_all_blocks=1 00:40:00.370 --rc geninfo_unexecuted_blocks=1 00:40:00.370 00:40:00.370 ' 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:40:00.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.370 --rc genhtml_branch_coverage=1 00:40:00.370 --rc genhtml_function_coverage=1 00:40:00.370 --rc genhtml_legend=1 00:40:00.370 --rc geninfo_all_blocks=1 00:40:00.370 --rc geninfo_unexecuted_blocks=1 00:40:00.370 00:40:00.370 ' 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:40:00.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.370 --rc genhtml_branch_coverage=1 00:40:00.370 --rc genhtml_function_coverage=1 00:40:00.370 --rc genhtml_legend=1 00:40:00.370 --rc geninfo_all_blocks=1 00:40:00.370 --rc geninfo_unexecuted_blocks=1 00:40:00.370 00:40:00.370 ' 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:40:00.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.370 --rc genhtml_branch_coverage=1 00:40:00.370 --rc genhtml_function_coverage=1 00:40:00.370 --rc genhtml_legend=1 00:40:00.370 --rc geninfo_all_blocks=1 00:40:00.370 --rc geninfo_unexecuted_blocks=1 00:40:00.370 00:40:00.370 ' 00:40:00.370 05:15:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:00.370 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:00.371 05:15:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:00.371 05:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:02.278 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:02.279 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:02.279 05:15:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:02.279 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:02.279 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:02.279 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:02.279 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:02.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:02.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:40:02.539 00:40:02.539 --- 10.0.0.2 ping statistics --- 00:40:02.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:02.539 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:02.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:02.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:40:02.539 00:40:02.539 --- 10.0.0.1 ping statistics --- 00:40:02.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:02.539 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:02.539 05:15:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2530254 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:02.539 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2530254 00:40:02.540 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2530254 ']' 00:40:02.540 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:02.540 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:02.540 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:02.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:02.540 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:02.540 05:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:02.540 [2024-10-28 05:15:52.967185] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:02.540 [2024-10-28 05:15:52.968329] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:40:02.540 [2024-10-28 05:15:52.968396] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:02.540 [2024-10-28 05:15:53.108414] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:02.799 [2024-10-28 05:15:53.151148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:02.799 [2024-10-28 05:15:53.203197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:02.799 [2024-10-28 05:15:53.203277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:02.799 [2024-10-28 05:15:53.203303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:02.799 [2024-10-28 05:15:53.203336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:02.799 [2024-10-28 05:15:53.203353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
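The nvmf_tcp_init trace above moves one E810 port (cvl_0_0) into the cvl_0_0_ns_spdk namespace as the target side, keeps its peer (cvl_0_1) in the root namespace as the initiator side, assigns 10.0.0.2/24 and 10.0.0.1/24, opens TCP port 4420 with a tagged iptables rule, and checks reachability with ping in both directions. A minimal stand-alone sketch of that sequence, assuming the same interface and namespace names as this run and root privileges:

#!/usr/bin/env bash
# Sketch of the namespace setup performed by nvmf_tcp_init in the trace above.
# Interface names are taken from this run; adjust TGT_IF/INI_IF for other NICs.
set -euo pipefail

TGT_IF=cvl_0_0          # target-side port, moved into the namespace
INI_IF=cvl_0_1          # initiator-side port, stays in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port; the comment tag lets cleanup strip the rule later.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1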
00:40:02.799 [2024-10-28 05:15:53.205318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:02.799 [2024-10-28 05:15:53.205380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:02.799 [2024-10-28 05:15:53.205439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:02.799 [2024-10-28 05:15:53.205442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:02.799 [2024-10-28 05:15:53.305146] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:02.799 [2024-10-28 05:15:53.305376] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:02.799 [2024-10-28 05:15:53.305678] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:02.799 [2024-10-28 05:15:53.306351] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:02.799 [2024-10-28 05:15:53.306658] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:03.736 05:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:03.736 05:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:40:03.736 05:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:03.736 05:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:03.736 05:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:03.736 [2024-10-28 05:15:54.006230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:03.736 Malloc0 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:03.736 [2024-10-28 05:15:54.070422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:03.736 { 00:40:03.736 "params": { 00:40:03.736 "name": "Nvme$subsystem", 00:40:03.736 "trtype": "$TEST_TRANSPORT", 00:40:03.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:03.736 "adrfam": "ipv4", 00:40:03.736 "trsvcid": "$NVMF_PORT", 00:40:03.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:03.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:03.736 "hdgst": ${hdgst:-false}, 00:40:03.736 "ddgst": ${ddgst:-false} 00:40:03.736 }, 00:40:03.736 "method": "bdev_nvme_attach_controller" 00:40:03.736 } 00:40:03.736 EOF 00:40:03.736 )") 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
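The rpc_cmd calls traced above drive the target's configuration over /var/tmp/spdk.sock (rpc_cmd is the test harness's wrapper around scripts/rpc.py). Collected into one place, the same sequence looks roughly like the sketch below; the SPDK_DIR path simply mirrors this workspace and is otherwise an assumption:

# Sketch: the traced target-side configuration issued via scripts/rpc.py directly.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192                    # same transport flags as the trace
$RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # attach Malloc0 to the subsystem
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After this, bdevio is pointed at the listener through the generated attach-controller JSON rendered a few lines below, rather than through an RPC socket of its own.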
00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:40:03.736 05:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:03.736 "params": { 00:40:03.736 "name": "Nvme1", 00:40:03.736 "trtype": "tcp", 00:40:03.736 "traddr": "10.0.0.2", 00:40:03.736 "adrfam": "ipv4", 00:40:03.736 "trsvcid": "4420", 00:40:03.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:03.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:03.736 "hdgst": false, 00:40:03.736 "ddgst": false 00:40:03.736 }, 00:40:03.736 "method": "bdev_nvme_attach_controller" 00:40:03.736 }' 00:40:03.736 [2024-10-28 05:15:54.118860] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:40:03.736 [2024-10-28 05:15:54.118956] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2530408 ] 00:40:03.736 [2024-10-28 05:15:54.252745] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:03.736 [2024-10-28 05:15:54.290505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:03.994 [2024-10-28 05:15:54.341045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:03.994 [2024-10-28 05:15:54.341097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:03.994 [2024-10-28 05:15:54.341101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:04.253 I/O targets: 00:40:04.253 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:04.253 00:40:04.253 00:40:04.253 CUnit - A unit testing framework for C - Version 2.1-3 00:40:04.253 http://cunit.sourceforge.net/ 00:40:04.253 00:40:04.253 00:40:04.253 Suite: bdevio tests on: Nvme1n1 00:40:04.253 Test: blockdev write read block ...passed 00:40:04.253 Test: blockdev write zeroes read block ...passed 00:40:04.253 Test: blockdev write zeroes read no split ...passed 00:40:04.253 Test: blockdev write zeroes read split ...passed 00:40:04.253 Test: blockdev write zeroes read split partial ...passed 00:40:04.253 Test: blockdev reset ...[2024-10-28 05:15:54.828546] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:04.253 [2024-10-28 05:15:54.828665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149e600 (9): Bad file descriptor 00:40:04.510 [2024-10-28 05:15:54.962750] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:40:04.510 passed 00:40:04.510 Test: blockdev write read 8 blocks ...passed 00:40:04.510 Test: blockdev write read size > 128k ...passed 00:40:04.510 Test: blockdev write read invalid size ...passed 00:40:04.510 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:04.510 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:04.510 Test: blockdev write read max offset ...passed 00:40:04.768 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:04.768 Test: blockdev writev readv 8 blocks ...passed 00:40:04.768 Test: blockdev writev readv 30 x 1block ...passed 00:40:04.768 Test: blockdev writev readv block ...passed 00:40:04.768 Test: blockdev writev readv size > 128k ...passed 00:40:04.768 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:04.768 Test: blockdev comparev and writev ...[2024-10-28 05:15:55.178432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:04.768 [2024-10-28 05:15:55.178475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:04.768 [2024-10-28 05:15:55.178503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:04.768 [2024-10-28 05:15:55.178520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:04.768 [2024-10-28 05:15:55.178937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:04.768 [2024-10-28 05:15:55.178961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:04.768 [2024-10-28 05:15:55.178983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:04.768 [2024-10-28 05:15:55.179008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:04.768 [2024-10-28 05:15:55.179433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:04.768 [2024-10-28 05:15:55.179458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:04.768 [2024-10-28 05:15:55.179480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:04.768 [2024-10-28 05:15:55.179495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:04.768 [2024-10-28 05:15:55.179905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:04.768 [2024-10-28 05:15:55.179930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:04.768 [2024-10-28 05:15:55.179952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:04.768 [2024-10-28 05:15:55.179968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:04.768 passed 00:40:04.768 Test: blockdev nvme passthru rw ...passed 00:40:04.768 Test: blockdev nvme passthru vendor specific ...[2024-10-28 05:15:55.262948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:04.768 [2024-10-28 05:15:55.262976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:04.768 [2024-10-28 05:15:55.263157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:04.768 [2024-10-28 05:15:55.263181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:04.768 [2024-10-28 05:15:55.263354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:04.768 [2024-10-28 05:15:55.263377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:04.768 [2024-10-28 05:15:55.263550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:04.768 [2024-10-28 05:15:55.263573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:04.768 passed 00:40:04.768 Test: blockdev nvme admin passthru ...passed 00:40:04.768 Test: blockdev copy ...passed 00:40:04.768 00:40:04.768 Run Summary: Type Total Ran Passed Failed Inactive 00:40:04.768 suites 1 1 n/a 0 0 00:40:04.769 tests 23 23 23 0 0 00:40:04.769 asserts 152 152 152 0 n/a 00:40:04.769 00:40:04.769 Elapsed time = 1.376 seconds 00:40:05.025 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:05.025 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.025 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:05.025 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.025 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:05.025 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:05.025 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:05.025 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:05.025 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:05.025 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:05.025 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:05.025 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:05.025 rmmod nvme_tcp 00:40:05.025 rmmod nvme_fabrics 00:40:05.026 rmmod nvme_keyring 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2530254 ']' 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2530254 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2530254 ']' 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2530254 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2530254 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2530254' 00:40:05.026 killing process with pid 2530254 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2530254 00:40:05.026 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2530254 00:40:05.284 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:05.284 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:05.284 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:05.284 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:40:05.284 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:40:05.284 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:05.284 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:40:05.284 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:05.284 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:05.284 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:05.284 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:05.284 05:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:07.822 05:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:07.822 00:40:07.822 real 0m7.433s 00:40:07.822 user 
0m9.951s 00:40:07.822 sys 0m2.667s 00:40:07.822 05:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:07.822 05:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:07.822 ************************************ 00:40:07.822 END TEST nvmf_bdevio 00:40:07.822 ************************************ 00:40:07.822 05:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:07.822 00:40:07.822 real 4m3.893s 00:40:07.822 user 8m54.890s 00:40:07.822 sys 1m25.534s 00:40:07.822 05:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:07.822 05:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:07.822 ************************************ 00:40:07.822 END TEST nvmf_target_core_interrupt_mode 00:40:07.822 ************************************ 00:40:07.822 05:15:57 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:07.822 05:15:57 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:07.822 05:15:57 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:07.822 05:15:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:07.822 ************************************ 00:40:07.822 START TEST nvmf_interrupt 00:40:07.822 ************************************ 00:40:07.822 05:15:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:07.822 * Looking for test storage... 
00:40:07.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1689 -- # lcov --version 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:40:07.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:07.823 --rc genhtml_branch_coverage=1 00:40:07.823 --rc genhtml_function_coverage=1 00:40:07.823 --rc genhtml_legend=1 00:40:07.823 --rc geninfo_all_blocks=1 00:40:07.823 --rc geninfo_unexecuted_blocks=1 00:40:07.823 00:40:07.823 ' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:40:07.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:07.823 --rc genhtml_branch_coverage=1 00:40:07.823 --rc genhtml_function_coverage=1 00:40:07.823 --rc genhtml_legend=1 00:40:07.823 --rc geninfo_all_blocks=1 00:40:07.823 --rc geninfo_unexecuted_blocks=1 00:40:07.823 00:40:07.823 ' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:40:07.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:07.823 --rc genhtml_branch_coverage=1 00:40:07.823 --rc genhtml_function_coverage=1 00:40:07.823 --rc genhtml_legend=1 00:40:07.823 --rc geninfo_all_blocks=1 00:40:07.823 --rc geninfo_unexecuted_blocks=1 00:40:07.823 00:40:07.823 ' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:40:07.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:07.823 --rc genhtml_branch_coverage=1 00:40:07.823 --rc genhtml_function_coverage=1 00:40:07.823 --rc genhtml_legend=1 00:40:07.823 --rc geninfo_all_blocks=1 00:40:07.823 --rc geninfo_unexecuted_blocks=1 00:40:07.823 00:40:07.823 ' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:07.823 05:15:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:09.725 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:09.725 05:15:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:09.725 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:09.725 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:09.725 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:09.725 05:16:00 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:09.725 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:09.725 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:09.725 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:09.726 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:09.726 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:09.726 05:16:00 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:09.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:09.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:40:09.726 00:40:09.726 --- 10.0.0.2 ping statistics --- 00:40:09.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:09.726 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:09.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:09.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:40:09.726 00:40:09.726 --- 10.0.0.1 ping statistics --- 00:40:09.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:09.726 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=2532479 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 2532479 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 2532479 ']' 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:09.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:09.726 05:16:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:09.726 [2024-10-28 05:16:00.221310] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:09.726 [2024-10-28 05:16:00.222487] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:40:09.726 [2024-10-28 05:16:00.222550] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:09.986 [2024-10-28 05:16:00.362754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:40:09.986 [2024-10-28 05:16:00.405718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:09.986 [2024-10-28 05:16:00.453947] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:09.986 [2024-10-28 05:16:00.454014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:09.986 [2024-10-28 05:16:00.454037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:09.986 [2024-10-28 05:16:00.454051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:09.986 [2024-10-28 05:16:00.454063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:09.986 [2024-10-28 05:16:00.455498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:09.986 [2024-10-28 05:16:00.455504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:09.986 [2024-10-28 05:16:00.553590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:09.986 [2024-10-28 05:16:00.553671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:09.986 [2024-10-28 05:16:00.553902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:10.922 5000+0 records in 00:40:10.922 5000+0 records out 00:40:10.922 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0139983 s, 732 MB/s 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.922 AIO0 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.922 
[2024-10-28 05:16:01.304199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:10.922 [2024-10-28 05:16:01.328357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2532479 0 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2532479 0 idle 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2532479 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2532479 -w 256 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2532479 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.29 reactor_0' 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2532479 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.29 reactor_0 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:10.922 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2532479 1 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2532479 1 idle 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2532479 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2532479 -w 256 00:40:10.923 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2532560 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1' 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2532560 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2532761 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 
4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2532479 0 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2532479 0 busy 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2532479 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2532479 -w 256 00:40:11.181 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:11.439 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2532479 root 20 0 128.2g 47616 34560 S 6.2 0.1 0:00.30 reactor_0' 00:40:11.439 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2532479 root 20 0 128.2g 47616 34560 S 6.2 0.1 0:00.30 reactor_0 00:40:11.439 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:11.439 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:11.439 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:40:11.439 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:40:11.439 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:11.439 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:11.439 05:16:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:40:12.372 05:16:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:40:12.372 05:16:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:12.372 05:16:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2532479 -w 256 00:40:12.372 05:16:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:12.637 05:16:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2532479 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:02.54 reactor_0' 00:40:12.637 05:16:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2532479 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:02.54 reactor_0 00:40:12.637 05:16:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:12.637 05:16:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@28 -- # cpu_rate=99 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2532479 1 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2532479 1 busy 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2532479 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2532479 -w 256 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2532560 root 20 0 128.2g 48000 34560 R 93.8 0.1 0:01.29 reactor_1' 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2532560 root 20 0 128.2g 48000 34560 R 93.8 0.1 0:01.29 reactor_1 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:12.637 05:16:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2532761 00:40:22.616 Initializing NVMe Controllers 00:40:22.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:22.616 Controller IO queue size 256, less than required. 00:40:22.616 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:40:22.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:22.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:22.616 Initialization complete. Launching workers. 00:40:22.616 ======================================================== 00:40:22.616 Latency(us) 00:40:22.616 Device Information : IOPS MiB/s Average min max 00:40:22.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 14201.30 55.47 18038.29 4460.29 21837.60 00:40:22.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13579.40 53.04 18866.86 3918.67 21731.43 00:40:22.616 ======================================================== 00:40:22.616 Total : 27780.70 108.52 18443.30 3918.67 21837.60 00:40:22.616 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2532479 0 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2532479 0 idle 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2532479 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2532479 -w 256 00:40:22.616 05:16:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2532479 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:20.20 reactor_0' 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2532479 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:20.20 reactor_0 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2532479 1 00:40:22.616 05:16:12 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2532479 1 idle 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2532479 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2532479 -w 256 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2532560 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.95 reactor_1' 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2532560 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.95 reactor_1 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:40:22.616 05:16:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:40:23.996 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:40:23.997 05:16:14 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2532479 0 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2532479 0 idle 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2532479 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2532479 -w 256 00:40:23.997 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:24.256 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2532479 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:20.31 reactor_0' 00:40:24.256 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2532479 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:20.31 reactor_0 00:40:24.256 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:24.256 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:24.256 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:24.256 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2532479 1 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2532479 1 idle 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2532479 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:24.257 05:16:14 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2532479 -w 256 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2532560 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:09.99 reactor_1' 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2532560 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:09.99 reactor_1 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:24.257 05:16:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:24.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:24.515 rmmod nvme_tcp 00:40:24.515 rmmod nvme_fabrics 00:40:24.515 rmmod nvme_keyring 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 2532479 ']' 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 2532479 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 2532479 ']' 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 2532479 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:24.515 05:16:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2532479 00:40:24.515 05:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:24.515 05:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:24.515 05:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2532479' 00:40:24.515 killing process with pid 2532479 00:40:24.515 05:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 2532479 00:40:24.515 05:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 2532479 00:40:24.773 05:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:24.773 05:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:24.773 05:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:24.773 05:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:24.773 05:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:40:24.773 05:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:24.773 05:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:40:24.773 05:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:24.773 05:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:24.773 05:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:24.773 05:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:24.773 05:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.306 05:16:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:27.306 00:40:27.306 real 0m19.336s 00:40:27.306 user 0m37.017s 00:40:27.306 sys 0m6.548s 00:40:27.306 05:16:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:27.306 05:16:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:27.306 ************************************ 00:40:27.306 END TEST nvmf_interrupt 00:40:27.306 ************************************ 00:40:27.306 00:40:27.306 real 34m15.025s 00:40:27.306 user 90m13.877s 00:40:27.306 sys 8m9.802s 00:40:27.306 05:16:17 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:27.306 05:16:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.306 ************************************ 00:40:27.306 END TEST nvmf_tcp 00:40:27.306 ************************************ 00:40:27.306 
05:16:17 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:40:27.306 05:16:17 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:27.306 05:16:17 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:27.306 05:16:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:27.306 05:16:17 -- common/autotest_common.sh@10 -- # set +x 00:40:27.306 ************************************ 00:40:27.306 START TEST spdkcli_nvmf_tcp 00:40:27.306 ************************************ 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:27.306 * Looking for test storage... 00:40:27.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:40:27.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.306 --rc genhtml_branch_coverage=1 00:40:27.306 --rc genhtml_function_coverage=1 00:40:27.306 --rc genhtml_legend=1 00:40:27.306 --rc geninfo_all_blocks=1 00:40:27.306 --rc geninfo_unexecuted_blocks=1 00:40:27.306 00:40:27.306 ' 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:40:27.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.306 --rc genhtml_branch_coverage=1 00:40:27.306 --rc genhtml_function_coverage=1 00:40:27.306 --rc genhtml_legend=1 00:40:27.306 --rc geninfo_all_blocks=1 00:40:27.306 --rc geninfo_unexecuted_blocks=1 00:40:27.306 00:40:27.306 ' 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:40:27.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.306 --rc genhtml_branch_coverage=1 00:40:27.306 --rc genhtml_function_coverage=1 00:40:27.306 --rc genhtml_legend=1 00:40:27.306 --rc geninfo_all_blocks=1 00:40:27.306 --rc geninfo_unexecuted_blocks=1 00:40:27.306 00:40:27.306 ' 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:40:27.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.306 --rc genhtml_branch_coverage=1 00:40:27.306 --rc genhtml_function_coverage=1 00:40:27.306 --rc genhtml_legend=1 00:40:27.306 --rc geninfo_all_blocks=1 00:40:27.306 --rc geninfo_unexecuted_blocks=1 00:40:27.306 00:40:27.306 ' 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:27.306 
05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:27.306 05:16:17 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:27.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:27.306 05:16:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2534677 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2534677 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2534677 ']' 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:27.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:27.307 05:16:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.307 [2024-10-28 05:16:17.610461] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 
00:40:27.307 [2024-10-28 05:16:17.610552] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2534677 ] 00:40:27.307 [2024-10-28 05:16:17.743713] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:27.307 [2024-10-28 05:16:17.781673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:27.307 [2024-10-28 05:16:17.833868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:27.307 [2024-10-28 05:16:17.833873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.241 05:16:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:28.241 05:16:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:40:28.241 05:16:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:28.241 05:16:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:28.241 05:16:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:28.241 05:16:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:28.241 05:16:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:28.241 05:16:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:28.241 05:16:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:28.241 05:16:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:28.241 05:16:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:28.241 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:28.241 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:28.241 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:28.241 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:28.241 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:28.241 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:28.241 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:28.242 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:28.242 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:28.242 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:28.242 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:28.242 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:28.242 ' 00:40:30.775 [2024-10-28 05:16:21.302148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:32.150 [2024-10-28 05:16:22.583544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:34.681 [2024-10-28 05:16:24.957219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:36.582 [2024-10-28 05:16:27.002753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:38.015 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:38.015 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:38.015 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:38.015 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:38.015 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:38.015 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:38.015 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:38.015 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:38.015 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:38.015 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:38.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:38.015 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:38.273 05:16:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:38.273 05:16:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:38.273 05:16:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:38.273 05:16:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:38.273 05:16:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:38.273 05:16:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:38.273 05:16:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:38.273 05:16:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:38.841 05:16:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:38.841 05:16:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:38.841 05:16:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:38.841 05:16:29 
spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:38.841 05:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:38.841 05:16:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:38.841 05:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:38.841 05:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:38.841 05:16:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:38.841 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:38.841 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:38.841 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:38.841 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:38.841 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:38.841 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:38.841 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:38.841 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:38.841 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:38.841 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:38.841 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:38.841 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:38.841 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:38.841 ' 00:40:44.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:44.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:44.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:44.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:44.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:44.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:44.105 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:44.105 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:44.105 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:44.105 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:44.105 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:44.105 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:44.105 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:44.105 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:44.363 05:16:34 
spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2534677 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2534677 ']' 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2534677 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2534677 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2534677' 00:40:44.363 killing process with pid 2534677 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2534677 00:40:44.363 05:16:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2534677 00:40:44.621 05:16:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:44.622 05:16:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:44.622 05:16:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2534677 ']' 00:40:44.622 05:16:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2534677 00:40:44.622 05:16:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2534677 ']' 00:40:44.622 05:16:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2534677 00:40:44.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2534677) - No such process 00:40:44.622 05:16:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2534677 is not found' 00:40:44.622 Process with pid 2534677 is not found 00:40:44.622 05:16:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:44.622 05:16:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:44.622 05:16:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:44.622 00:40:44.622 real 0m17.661s 00:40:44.622 user 0m37.902s 00:40:44.622 sys 0m0.859s 00:40:44.622 05:16:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:44.622 05:16:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:44.622 ************************************ 00:40:44.622 END TEST spdkcli_nvmf_tcp 00:40:44.622 ************************************ 00:40:44.622 05:16:35 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:44.622 05:16:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:44.622 05:16:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:44.622 05:16:35 -- common/autotest_common.sh@10 -- # set +x 00:40:44.622 ************************************ 00:40:44.622 START TEST nvmf_identify_passthru 00:40:44.622 ************************************ 00:40:44.622 05:16:35 nvmf_identify_passthru -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:44.622 * Looking for test storage... 00:40:44.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:44.622 05:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:40:44.622 05:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1689 -- # lcov --version 00:40:44.622 05:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:40:44.622 05:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:44.622 05:16:35 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:44.622 05:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:44.622 05:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:40:44.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:44.622 --rc genhtml_branch_coverage=1 00:40:44.622 --rc genhtml_function_coverage=1 00:40:44.622 --rc genhtml_legend=1 00:40:44.622 --rc geninfo_all_blocks=1 00:40:44.622 --rc geninfo_unexecuted_blocks=1 00:40:44.622 00:40:44.622 ' 00:40:44.622 05:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:40:44.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:44.622 --rc genhtml_branch_coverage=1 00:40:44.622 --rc genhtml_function_coverage=1 00:40:44.622 --rc genhtml_legend=1 00:40:44.622 --rc geninfo_all_blocks=1 00:40:44.622 --rc geninfo_unexecuted_blocks=1 00:40:44.622 00:40:44.622 ' 00:40:44.622 05:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:40:44.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:44.622 --rc genhtml_branch_coverage=1 00:40:44.622 --rc genhtml_function_coverage=1 00:40:44.622 --rc genhtml_legend=1 00:40:44.622 --rc geninfo_all_blocks=1 00:40:44.622 --rc geninfo_unexecuted_blocks=1 00:40:44.622 00:40:44.622 ' 00:40:44.880 05:16:35 nvmf_identify_passthru -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:40:44.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:44.880 --rc genhtml_branch_coverage=1 00:40:44.880 --rc genhtml_function_coverage=1 00:40:44.880 --rc genhtml_legend=1 00:40:44.880 --rc geninfo_all_blocks=1 00:40:44.880 --rc geninfo_unexecuted_blocks=1 00:40:44.880 00:40:44.880 ' 00:40:44.880 05:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:44.880 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:44.880 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:44.880 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:44.880 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:44.880 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:44.880 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:44.880 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:44.880 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:44.880 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:44.880 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:44.880 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:44.881 05:16:35 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:44.881 05:16:35 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:44.881 05:16:35 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:44.881 05:16:35 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:44.881 05:16:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.881 05:16:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.881 05:16:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.881 05:16:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:44.881 05:16:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:44.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:44.881 05:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:44.881 05:16:35 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:44.881 05:16:35 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:44.881 05:16:35 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:44.881 05:16:35 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:44.881 05:16:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.881 05:16:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.881 05:16:35 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.881 05:16:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:44.881 05:16:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.881 05:16:35 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:44.881 05:16:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:44.881 05:16:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:44.881 05:16:35 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:44.881 05:16:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:46.784 05:16:37 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:46.784 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:46.784 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:46.784 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:46.784 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:46.784 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:46.785 05:16:37 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:46.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:46.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:40:46.785 00:40:46.785 --- 10.0.0.2 ping statistics --- 00:40:46.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:46.785 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:46.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:46.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:40:46.785 00:40:46.785 --- 10.0.0.1 ping statistics --- 00:40:46.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:46.785 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:46.785 05:16:37 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:46.785 05:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:46.785 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:46.785 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.785 05:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:46.785 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@1505 -- # bdfs=() 00:40:46.785 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@1505 -- # local bdfs 00:40:46.785 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@1506 -- # bdfs=($(get_nvme_bdfs)) 00:40:46.785 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:40:47.043 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@1494 -- # bdfs=() 00:40:47.043 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@1494 -- # local bdfs 00:40:47.043 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:47.043 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@1495 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:47.043 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:40:47.043 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # (( 1 == 0 )) 00:40:47.043 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:88:00.0 00:40:47.043 05:16:37 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # echo 0000:88:00.0 00:40:47.043 05:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:47.043 05:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:47.044 05:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:47.044 05:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:47.044 05:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:51.232 05:16:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:40:51.232 05:16:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:51.232 05:16:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:51.232 05:16:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:56.502 05:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:56.502 05:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:56.502 05:16:46 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:56.502 05:16:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.502 05:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:56.502 05:16:46 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:56.502 05:16:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.502 05:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2539281 00:40:56.502 05:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:56.502 05:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:56.502 05:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2539281 00:40:56.502 05:16:46 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2539281 ']' 00:40:56.502 05:16:46 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:56.502 05:16:46 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:56.502 05:16:46 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:56.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:56.502 05:16:46 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:56.502 05:16:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.502 [2024-10-28 05:16:46.172561] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:40:56.502 [2024-10-28 05:16:46.172665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:56.502 [2024-10-28 05:16:46.312230] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:56.502 [2024-10-28 05:16:46.347528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:56.502 [2024-10-28 05:16:46.395993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:56.502 [2024-10-28 05:16:46.396057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
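Because nvmf_tgt was started with --wait-for-rpc, subsystem initialization is held back until the test enables passthru identify and then finishes startup over JSON-RPC; that is what the rpc_cmd calls that follow in this log do (rpc_cmd is the test suite's wrapper around the same RPCs). Expressed as plain scripts/rpc.py calls against the default /var/tmp/spdk.sock, the sequence is roughly the sketch below; the controller, subsystem, serial and address values are the ones this run uses, and the sketch is illustrative rather than the literal identify_passthru.sh flow:

# Rough equivalent of the RPC sequence this test issues next.
# nvmf_set_config must precede framework_start_init, since --wait-for-rpc
# defers subsystem init until that call.
RPC=./scripts/rpc.py

$RPC nvmf_set_config --passthru-identify-ctrlr
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420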
00:40:56.502 [2024-10-28 05:16:46.396074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:56.502 [2024-10-28 05:16:46.396088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:56.502 [2024-10-28 05:16:46.396099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:56.502 [2024-10-28 05:16:46.397849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:56.502 [2024-10-28 05:16:46.397910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:56.502 [2024-10-28 05:16:46.397975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:56.502 [2024-10-28 05:16:46.397977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:40:56.761 05:16:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.761 INFO: Log level set to 20 00:40:56.761 INFO: Requests: 00:40:56.761 { 00:40:56.761 "jsonrpc": "2.0", 00:40:56.761 "method": "nvmf_set_config", 00:40:56.761 "id": 1, 00:40:56.761 "params": { 00:40:56.761 "admin_cmd_passthru": { 00:40:56.761 "identify_ctrlr": true 00:40:56.761 } 00:40:56.761 } 00:40:56.761 } 00:40:56.761 00:40:56.761 INFO: response: 00:40:56.761 { 00:40:56.761 "jsonrpc": "2.0", 00:40:56.761 "id": 1, 00:40:56.761 "result": true 00:40:56.761 } 00:40:56.761 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.761 05:16:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.761 INFO: Setting log level to 20 00:40:56.761 INFO: Setting log level to 20 00:40:56.761 INFO: Log level set to 20 00:40:56.761 INFO: Log level set to 20 00:40:56.761 INFO: Requests: 00:40:56.761 { 00:40:56.761 "jsonrpc": "2.0", 00:40:56.761 "method": "framework_start_init", 00:40:56.761 "id": 1 00:40:56.761 } 00:40:56.761 00:40:56.761 INFO: Requests: 00:40:56.761 { 00:40:56.761 "jsonrpc": "2.0", 00:40:56.761 "method": "framework_start_init", 00:40:56.761 "id": 1 00:40:56.761 } 00:40:56.761 00:40:56.761 [2024-10-28 05:16:47.263024] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:56.761 INFO: response: 00:40:56.761 { 00:40:56.761 "jsonrpc": "2.0", 00:40:56.761 "id": 1, 00:40:56.761 "result": true 00:40:56.761 } 00:40:56.761 00:40:56.761 INFO: response: 00:40:56.761 { 00:40:56.761 "jsonrpc": "2.0", 00:40:56.761 "id": 1, 00:40:56.761 "result": true 00:40:56.761 } 00:40:56.761 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.761 05:16:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.761 05:16:47 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:56.761 INFO: Setting log level to 40 00:40:56.761 INFO: Setting log level to 40 00:40:56.761 INFO: Setting log level to 40 00:40:56.761 [2024-10-28 05:16:47.272931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.761 05:16:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.761 05:16:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.761 05:16:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.044 Nvme0n1 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.044 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.044 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.044 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.044 [2024-10-28 05:16:50.160752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.044 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.044 [ 00:41:00.044 { 00:41:00.044 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:00.044 "subtype": "Discovery", 00:41:00.044 "listen_addresses": [], 00:41:00.044 "allow_any_host": true, 00:41:00.044 "hosts": [] 00:41:00.044 }, 00:41:00.044 { 00:41:00.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:00.044 "subtype": "NVMe", 00:41:00.044 "listen_addresses": [ 00:41:00.044 { 00:41:00.044 "trtype": "TCP", 00:41:00.044 "adrfam": "IPv4", 00:41:00.044 "traddr": "10.0.0.2", 00:41:00.044 "trsvcid": "4420" 00:41:00.044 } 00:41:00.044 ], 00:41:00.044 "allow_any_host": true, 00:41:00.044 "hosts": [], 00:41:00.044 "serial_number": 
"SPDK00000000000001", 00:41:00.044 "model_number": "SPDK bdev Controller", 00:41:00.044 "max_namespaces": 1, 00:41:00.044 "min_cntlid": 1, 00:41:00.044 "max_cntlid": 65519, 00:41:00.044 "namespaces": [ 00:41:00.044 { 00:41:00.044 "nsid": 1, 00:41:00.044 "bdev_name": "Nvme0n1", 00:41:00.044 "name": "Nvme0n1", 00:41:00.044 "nguid": "2F44097326CF4151A4D71FF8795B7B55", 00:41:00.044 "uuid": "2f440973-26cf-4151-a4d7-1ff8795b7b55" 00:41:00.044 } 00:41:00.044 ] 00:41:00.044 } 00:41:00.044 ] 00:41:00.044 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.044 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:00.044 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:00.044 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:00.044 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:41:00.044 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:00.044 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:00.044 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:00.303 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:00.303 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:41:00.303 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:00.303 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:00.303 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.303 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.303 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.303 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:00.303 05:16:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:00.303 05:16:50 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:00.303 05:16:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:00.303 05:16:50 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:00.303 05:16:50 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:00.303 05:16:50 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:00.303 05:16:50 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:00.303 rmmod nvme_tcp 00:41:00.303 rmmod nvme_fabrics 00:41:00.303 rmmod nvme_keyring 00:41:00.303 05:16:50 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:00.303 05:16:50 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:00.303 05:16:50 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:00.303 05:16:50 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 2539281 ']' 00:41:00.303 05:16:50 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 2539281 00:41:00.303 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2539281 ']' 00:41:00.303 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2539281 00:41:00.303 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:41:00.303 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:00.304 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2539281 00:41:00.304 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:00.304 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:00.304 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2539281' 00:41:00.304 killing process with pid 2539281 00:41:00.304 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2539281 00:41:00.304 05:16:50 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2539281 00:41:02.204 05:16:52 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:02.204 05:16:52 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:02.204 05:16:52 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:02.204 05:16:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:02.204 05:16:52 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:41:02.204 05:16:52 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:02.204 05:16:52 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:41:02.204 05:16:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:02.204 05:16:52 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:02.204 05:16:52 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:02.204 05:16:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:02.204 05:16:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:04.106 05:16:54 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:04.106 00:41:04.106 real 0m19.361s 00:41:04.106 user 0m29.997s 00:41:04.106 sys 0m3.231s 00:41:04.106 05:16:54 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:04.106 05:16:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:04.106 ************************************ 00:41:04.106 END TEST nvmf_identify_passthru 00:41:04.106 ************************************ 00:41:04.106 05:16:54 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:04.106 05:16:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:04.106 05:16:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:04.106 05:16:54 -- common/autotest_common.sh@10 -- # set +x 00:41:04.106 ************************************ 00:41:04.106 START TEST nvmf_dif 00:41:04.106 ************************************ 00:41:04.106 05:16:54 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:04.106 * Looking for test 
storage... 00:41:04.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:04.106 05:16:54 nvmf_dif -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:41:04.106 05:16:54 nvmf_dif -- common/autotest_common.sh@1689 -- # lcov --version 00:41:04.106 05:16:54 nvmf_dif -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:41:04.106 05:16:54 nvmf_dif -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:04.106 05:16:54 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:04.106 05:16:54 nvmf_dif -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:04.106 05:16:54 nvmf_dif -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:41:04.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.106 --rc genhtml_branch_coverage=1 00:41:04.106 --rc genhtml_function_coverage=1 00:41:04.106 --rc genhtml_legend=1 00:41:04.106 --rc geninfo_all_blocks=1 00:41:04.106 --rc geninfo_unexecuted_blocks=1 00:41:04.106 00:41:04.106 ' 00:41:04.106 05:16:54 nvmf_dif -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:41:04.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.106 --rc genhtml_branch_coverage=1 00:41:04.106 --rc genhtml_function_coverage=1 00:41:04.106 --rc genhtml_legend=1 00:41:04.106 --rc geninfo_all_blocks=1 00:41:04.106 --rc geninfo_unexecuted_blocks=1 00:41:04.106 00:41:04.106 ' 00:41:04.106 05:16:54 nvmf_dif -- 
common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:41:04.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.106 --rc genhtml_branch_coverage=1 00:41:04.106 --rc genhtml_function_coverage=1 00:41:04.106 --rc genhtml_legend=1 00:41:04.106 --rc geninfo_all_blocks=1 00:41:04.106 --rc geninfo_unexecuted_blocks=1 00:41:04.106 00:41:04.106 ' 00:41:04.106 05:16:54 nvmf_dif -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:41:04.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.106 --rc genhtml_branch_coverage=1 00:41:04.106 --rc genhtml_function_coverage=1 00:41:04.106 --rc genhtml_legend=1 00:41:04.106 --rc geninfo_all_blocks=1 00:41:04.106 --rc geninfo_unexecuted_blocks=1 00:41:04.106 00:41:04.106 ' 00:41:04.106 05:16:54 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:04.106 05:16:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:04.106 05:16:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:04.106 05:16:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:04.106 05:16:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:04.106 05:16:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:04.106 05:16:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:04.106 05:16:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:04.106 05:16:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:04.106 05:16:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:04.106 05:16:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:04.106 05:16:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:04.106 05:16:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:04.107 05:16:54 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:04.107 05:16:54 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:04.107 05:16:54 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:04.107 05:16:54 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:04.107 05:16:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.107 05:16:54 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.107 05:16:54 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.107 05:16:54 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:04.107 05:16:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:04.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:04.107 05:16:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:04.107 05:16:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:41:04.107 05:16:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:04.107 05:16:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:04.107 05:16:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:04.107 05:16:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:04.107 05:16:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:04.107 05:16:54 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:41:04.107 05:16:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:06.641 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:06.641 
05:16:56 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:06.641 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:06.641 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:06.641 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:06.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:06.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:41:06.641 00:41:06.641 --- 10.0.0.2 ping statistics --- 00:41:06.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.641 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:06.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:06.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:41:06.641 00:41:06.641 --- 10.0.0.1 ping statistics --- 00:41:06.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.641 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:41:06.641 05:16:56 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:07.584 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:07.584 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:07.584 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:07.584 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:07.584 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:07.584 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:07.584 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:07.584 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:07.584 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:07.584 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:07.584 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:07.584 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:07.584 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:07.584 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:07.584 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:07.584 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:07.584 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:07.584 05:16:58 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:07.584 05:16:58 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:07.584 05:16:58 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:07.584 05:16:58 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:07.584 05:16:58 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:07.584 05:16:58 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:07.584 05:16:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:07.584 05:16:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:07.584 05:16:58 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:07.584 05:16:58 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:07.584 05:16:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.584 05:16:58 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=2542642 00:41:07.584 05:16:58 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:07.584 05:16:58 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 2542642 00:41:07.584 05:16:58 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2542642 ']' 00:41:07.584 05:16:58 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:07.584 05:16:58 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:07.585 05:16:58 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:41:07.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:07.585 05:16:58 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:07.585 05:16:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.585 [2024-10-28 05:16:58.103177] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:41:07.585 [2024-10-28 05:16:58.103258] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:07.843 [2024-10-28 05:16:58.247235] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:07.843 [2024-10-28 05:16:58.287541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.843 [2024-10-28 05:16:58.336517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:07.843 [2024-10-28 05:16:58.336591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:07.843 [2024-10-28 05:16:58.336609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:07.843 [2024-10-28 05:16:58.336623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:07.843 [2024-10-28 05:16:58.336647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:07.843 [2024-10-28 05:16:58.337307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:08.103 05:16:58 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:08.103 05:16:58 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:41:08.103 05:16:58 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:08.103 05:16:58 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:08.103 05:16:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:08.103 05:16:58 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:08.103 05:16:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:08.103 05:16:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:08.103 05:16:58 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.103 05:16:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:08.103 [2024-10-28 05:16:58.493200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:08.103 05:16:58 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.103 05:16:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:08.103 05:16:58 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:08.103 05:16:58 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:08.103 05:16:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:08.103 ************************************ 00:41:08.103 START TEST fio_dif_1_default 00:41:08.103 ************************************ 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:08.103 05:16:58 
nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:08.103 bdev_null0 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:08.103 [2024-10-28 05:16:58.553384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:08.103 05:16:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:08.103 { 00:41:08.103 "params": { 00:41:08.103 "name": "Nvme$subsystem", 00:41:08.103 "trtype": "$TEST_TRANSPORT", 00:41:08.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:08.103 "adrfam": "ipv4", 00:41:08.103 "trsvcid": "$NVMF_PORT", 00:41:08.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:08.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:08.103 "hdgst": ${hdgst:-false}, 00:41:08.103 "ddgst": ${ddgst:-false} 00:41:08.103 }, 00:41:08.103 "method": "bdev_nvme_attach_controller" 00:41:08.103 } 00:41:08.103 EOF 00:41:08.103 )") 00:41:08.104 05:16:58 
nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
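Note: the fio_dif_1_default setup above is driven entirely through rpc_cmd against the target started by nvmfappstart. A minimal standalone sketch of the same sequence with the SPDK RPC client is shown below; the scripts/rpc.py path and the use of the default /var/tmp/spdk.sock socket are assumptions, while the RPC names and arguments mirror the trace (dif.sh@21-24, dif.sh@50, dif.sh@136).
# Sketch only: replay the RPCs visible in the trace against a running nvmf_tgt.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed client path, default socket
# TCP transport with DIF insert/strip enabled
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
# Null bdev: size 64, block size 512, 16-byte metadata, DIF type 1 (dif.sh@15 defaults)
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# Subsystem cnode0 with namespace bdev_null0, listening on 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420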
00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:08.104 "params": { 00:41:08.104 "name": "Nvme0", 00:41:08.104 "trtype": "tcp", 00:41:08.104 "traddr": "10.0.0.2", 00:41:08.104 "adrfam": "ipv4", 00:41:08.104 "trsvcid": "4420", 00:41:08.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:08.104 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:08.104 "hdgst": false, 00:41:08.104 "ddgst": false 00:41:08.104 }, 00:41:08.104 "method": "bdev_nvme_attach_controller" 00:41:08.104 }' 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:08.104 05:16:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:08.362 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:08.363 fio-3.35 00:41:08.363 Starting 1 thread 00:41:20.563 00:41:20.563 filename0: (groupid=0, jobs=1): err= 0: pid=2542869: Mon Oct 28 05:17:09 2024 00:41:20.563 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:41:20.563 slat (nsec): min=4138, max=49332, avg=9012.38, stdev=3433.58 00:41:20.563 clat (usec): min=40780, max=47731, avg=41007.39, stdev=440.32 00:41:20.563 lat (usec): min=40788, max=47751, avg=41016.41, stdev=440.36 00:41:20.563 clat percentiles (usec): 00:41:20.563 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:20.563 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:20.563 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:20.563 | 99.00th=[41681], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:41:20.563 | 99.99th=[47973] 00:41:20.563 bw ( KiB/s): min= 384, max= 416, per=99.51%, avg=388.80, stdev=11.72, samples=20 00:41:20.563 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:41:20.563 lat (msec) : 50=100.00% 00:41:20.563 cpu : usr=91.54%, sys=8.16%, ctx=17, majf=0, minf=181 00:41:20.563 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.563 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.563 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:20.563 00:41:20.563 Run status group 0 (all jobs): 
00:41:20.563 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10013-10013msec 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.563 00:41:20.563 real 0m11.225s 00:41:20.563 user 0m10.191s 00:41:20.563 sys 0m1.140s 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.563 ************************************ 00:41:20.563 END TEST fio_dif_1_default 00:41:20.563 ************************************ 00:41:20.563 05:17:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:20.563 05:17:09 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:20.563 05:17:09 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:20.563 05:17:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:20.563 ************************************ 00:41:20.563 START TEST fio_dif_1_multi_subsystems 00:41:20.563 ************************************ 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.563 bdev_null0 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.563 [2024-10-28 05:17:09.820181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.563 bdev_null1 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:20.563 { 00:41:20.563 "params": { 00:41:20.563 "name": "Nvme$subsystem", 00:41:20.563 "trtype": "$TEST_TRANSPORT", 00:41:20.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.563 "adrfam": "ipv4", 00:41:20.563 "trsvcid": "$NVMF_PORT", 00:41:20.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.563 "hdgst": ${hdgst:-false}, 00:41:20.563 "ddgst": ${ddgst:-false} 00:41:20.563 }, 00:41:20.563 "method": "bdev_nvme_attach_controller" 00:41:20.563 } 00:41:20.563 EOF 00:41:20.563 )") 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.563 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:20.564 { 00:41:20.564 "params": { 00:41:20.564 "name": "Nvme$subsystem", 00:41:20.564 "trtype": "$TEST_TRANSPORT", 00:41:20.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.564 "adrfam": "ipv4", 00:41:20.564 "trsvcid": "$NVMF_PORT", 00:41:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.564 "hdgst": ${hdgst:-false}, 00:41:20.564 "ddgst": ${ddgst:-false} 00:41:20.564 }, 00:41:20.564 "method": "bdev_nvme_attach_controller" 00:41:20.564 } 00:41:20.564 EOF 00:41:20.564 )") 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:20.564 "params": { 00:41:20.564 "name": "Nvme0", 00:41:20.564 "trtype": "tcp", 00:41:20.564 "traddr": "10.0.0.2", 00:41:20.564 "adrfam": "ipv4", 00:41:20.564 "trsvcid": "4420", 00:41:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:20.564 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:20.564 "hdgst": false, 00:41:20.564 "ddgst": false 00:41:20.564 }, 00:41:20.564 "method": "bdev_nvme_attach_controller" 00:41:20.564 },{ 00:41:20.564 "params": { 00:41:20.564 "name": "Nvme1", 00:41:20.564 "trtype": "tcp", 00:41:20.564 "traddr": "10.0.0.2", 00:41:20.564 "adrfam": "ipv4", 00:41:20.564 "trsvcid": "4420", 00:41:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:20.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:20.564 "hdgst": false, 00:41:20.564 "ddgst": false 00:41:20.564 }, 00:41:20.564 "method": "bdev_nvme_attach_controller" 00:41:20.564 }' 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 
00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:20.564 05:17:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.564 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:20.564 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:20.564 fio-3.35 00:41:20.564 Starting 2 threads 00:41:30.535 00:41:30.535 filename0: (groupid=0, jobs=1): err= 0: pid=2544245: Mon Oct 28 05:17:21 2024 00:41:30.535 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:41:30.535 slat (nsec): min=4040, max=51817, avg=9542.43, stdev=3937.00 00:41:30.535 clat (usec): min=40873, max=43015, avg=40998.46, stdev=192.14 00:41:30.535 lat (usec): min=40882, max=43029, avg=41008.00, stdev=192.10 00:41:30.535 clat percentiles (usec): 00:41:30.535 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:30.535 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:30.535 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:30.535 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:41:30.535 | 99.99th=[43254] 00:41:30.535 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=388.80, stdev=11.72, samples=20 00:41:30.535 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:41:30.535 lat (msec) : 50=100.00% 00:41:30.535 cpu : usr=95.13%, sys=4.56%, ctx=18, majf=0, minf=210 00:41:30.535 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.535 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.535 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:30.536 filename1: (groupid=0, jobs=1): err= 0: pid=2544246: Mon Oct 28 05:17:21 2024 00:41:30.536 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:41:30.536 slat (nsec): min=6908, max=34487, avg=9404.13, stdev=3553.10 00:41:30.536 clat (usec): min=40885, max=43881, avg=41002.25, stdev=224.80 00:41:30.536 lat (usec): min=40893, max=43902, avg=41011.66, stdev=224.95 00:41:30.536 clat percentiles (usec): 00:41:30.536 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:30.536 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:30.536 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:30.536 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:41:30.536 | 99.99th=[43779] 00:41:30.536 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=388.80, stdev=11.72, samples=20 00:41:30.536 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:41:30.536 lat (msec) : 50=100.00% 00:41:30.536 cpu : usr=95.21%, sys=4.49%, ctx=15, majf=0, minf=139 00:41:30.536 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:41:30.536 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.536 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:30.536 00:41:30.536 Run status group 0 (all jobs): 00:41:30.536 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10011-10012msec 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.794 00:41:30.794 real 0m11.535s 00:41:30.794 user 0m20.444s 00:41:30.794 sys 0m1.237s 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:30.794 05:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.794 ************************************ 00:41:30.794 END TEST fio_dif_1_multi_subsystems 00:41:30.794 ************************************ 00:41:30.794 05:17:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:30.794 05:17:21 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:30.794 05:17:21 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:30.794 05:17:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:30.794 ************************************ 00:41:30.794 START TEST fio_dif_rand_params 00:41:30.794 ************************************ 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.794 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.053 bdev_null0 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.053 [2024-10-28 05:17:21.416017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:31.053 { 00:41:31.053 "params": { 00:41:31.053 "name": "Nvme$subsystem", 00:41:31.053 "trtype": "$TEST_TRANSPORT", 00:41:31.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:31.053 "adrfam": "ipv4", 00:41:31.053 "trsvcid": "$NVMF_PORT", 00:41:31.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:31.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:31.053 "hdgst": ${hdgst:-false}, 00:41:31.053 "ddgst": ${ddgst:-false} 00:41:31.053 }, 00:41:31.053 "method": "bdev_nvme_attach_controller" 00:41:31.053 } 00:41:31.053 EOF 00:41:31.053 )") 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:31.053 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
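Note: as in the earlier tests, fio_dif_rand_params hands fio a JSON config over /dev/fd/62 and a generated job file over /dev/fd/61 while preloading the SPDK bdev engine. A rough standalone equivalent is sketched below; the bdev name Nvme0n1, the on-disk bdev.json file and the spelled-out job options (bs=128k, numjobs=3, iodepth=3, runtime=5 per dif.sh@103) are assumptions for illustration, not a copy of the generated job file.
# Sketch only: what the fio_bdev wrapper in the trace expands to, using files on disk
# instead of /dev/fd descriptors.
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
# bdev.json would hold the gen_nvmf_target_json output, i.e. a bdev_nvme_attach_controller
# entry for Nvme0 at 10.0.0.2:4420, subnqn nqn.2016-06.io.spdk:cnode0
LD_PRELOAD=$PLUGIN /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
  --name=filename0 --filename=Nvme0n1 \
  --rw=randread --bs=128k --numjobs=3 --iodepth=3 --runtime=5 --time_based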
00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:31.054 "params": { 00:41:31.054 "name": "Nvme0", 00:41:31.054 "trtype": "tcp", 00:41:31.054 "traddr": "10.0.0.2", 00:41:31.054 "adrfam": "ipv4", 00:41:31.054 "trsvcid": "4420", 00:41:31.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:31.054 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:31.054 "hdgst": false, 00:41:31.054 "ddgst": false 00:41:31.054 }, 00:41:31.054 "method": "bdev_nvme_attach_controller" 00:41:31.054 }' 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:31.054 05:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:31.312 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:31.312 ... 
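On the initiator side, gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters printed above, and the test pipes them to fio's spdk_bdev ioengine via /dev/fd/62 with the generated job file on /dev/fd/61. A rough standalone equivalent using ordinary files is sketched below; the /tmp paths, the outer "subsystems"/"bdev" wrapper around the printed params, the bdev name Nvme0n1 used as the fio filename, and the thread/time_based job options are assumptions, while the attach-controller params, the LD_PRELOAD plugin path, and the bs=128k / iodepth=3 / numjobs=3 / runtime=5 randread workload are taken from this run.

#!/usr/bin/env bash
# Standalone sketch of the fio invocation above, using files instead of /dev/fd.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# JSON bdev config: outer wrapper assumed, inner params copied from the trace.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Job file matching the NULL_DIF=3 pass: 128k random reads, 3 jobs, queue depth 3, 5s.
cat > /tmp/dif.fio <<'EOF'
[global]
thread=1
time_based=1
runtime=5

[filename0]
; bdev name assumed: controller "Nvme0" exposes namespace 1 as Nvme0n1
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
EOF

LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio

The "Starting 3 threads" line and the three filename0 result blocks that follow correspond to numjobs=3 running against the single null-backed namespace.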
00:41:31.312 fio-3.35 00:41:31.312 Starting 3 threads 00:41:37.993 00:41:37.993 filename0: (groupid=0, jobs=1): err= 0: pid=2545612: Mon Oct 28 05:17:27 2024 00:41:37.993 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(129MiB/5040msec) 00:41:37.993 slat (nsec): min=6659, max=56853, avg=15991.02, stdev=5421.27 00:41:37.993 clat (usec): min=4535, max=92367, avg=14588.66, stdev=12918.82 00:41:37.993 lat (usec): min=4549, max=92398, avg=14604.65, stdev=12919.30 00:41:37.993 clat percentiles (usec): 00:41:37.993 | 1.00th=[ 5080], 5.00th=[ 5538], 10.00th=[ 7308], 20.00th=[ 8586], 00:41:37.993 | 30.00th=[ 9372], 40.00th=[10421], 50.00th=[11338], 60.00th=[11863], 00:41:37.993 | 70.00th=[12518], 80.00th=[13566], 90.00th=[16450], 95.00th=[52167], 00:41:37.993 | 99.00th=[55313], 99.50th=[56361], 99.90th=[92799], 99.95th=[92799], 00:41:37.993 | 99.99th=[92799] 00:41:37.993 bw ( KiB/s): min=18944, max=35584, per=31.46%, avg=26419.20, stdev=5284.88, samples=10 00:41:37.993 iops : min= 148, max= 278, avg=206.40, stdev=41.29, samples=10 00:41:37.993 lat (msec) : 10=34.69%, 20=56.14%, 50=1.45%, 100=7.73% 00:41:37.993 cpu : usr=93.37%, sys=6.15%, ctx=14, majf=0, minf=134 00:41:37.993 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.993 issued rwts: total=1035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.993 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:37.993 filename0: (groupid=0, jobs=1): err= 0: pid=2545613: Mon Oct 28 05:17:27 2024 00:41:37.993 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(145MiB/5045msec) 00:41:37.993 slat (nsec): min=7037, max=40248, avg=15964.36, stdev=5038.56 00:41:37.993 clat (usec): min=5062, max=90256, avg=13003.06, stdev=9441.79 00:41:37.993 lat (usec): min=5075, max=90275, avg=13019.03, stdev=9441.93 00:41:37.993 clat percentiles (usec): 00:41:37.993 | 1.00th=[ 5866], 5.00th=[ 7242], 10.00th=[ 7832], 20.00th=[ 8455], 00:41:37.993 | 30.00th=[ 9110], 40.00th=[10028], 50.00th=[10945], 60.00th=[11731], 00:41:37.993 | 70.00th=[12649], 80.00th=[13698], 90.00th=[15533], 95.00th=[47973], 00:41:37.993 | 99.00th=[53216], 99.50th=[53740], 99.90th=[55837], 99.95th=[90702], 00:41:37.993 | 99.99th=[90702] 00:41:37.993 bw ( KiB/s): min=20992, max=35328, per=35.25%, avg=29593.60, stdev=4825.98, samples=10 00:41:37.993 iops : min= 164, max= 276, avg=231.20, stdev=37.70, samples=10 00:41:37.993 lat (msec) : 10=39.95%, 20=54.79%, 50=2.33%, 100=2.93% 00:41:37.993 cpu : usr=93.02%, sys=6.52%, ctx=21, majf=0, minf=114 00:41:37.993 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.993 issued rwts: total=1159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.993 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:37.993 filename0: (groupid=0, jobs=1): err= 0: pid=2545614: Mon Oct 28 05:17:27 2024 00:41:37.993 read: IOPS=221, BW=27.6MiB/s (29.0MB/s)(140MiB/5046msec) 00:41:37.993 slat (nsec): min=6707, max=43999, avg=14595.43, stdev=4343.52 00:41:37.993 clat (usec): min=4649, max=57458, avg=13504.73, stdev=10282.85 00:41:37.993 lat (usec): min=4661, max=57470, avg=13519.33, stdev=10282.59 00:41:37.993 clat percentiles (usec): 00:41:37.994 | 1.00th=[ 5800], 5.00th=[ 7308], 10.00th=[ 8029], 
20.00th=[ 8717], 00:41:37.994 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[11076], 60.00th=[11994], 00:41:37.994 | 70.00th=[12911], 80.00th=[13698], 90.00th=[15533], 95.00th=[48497], 00:41:37.994 | 99.00th=[54264], 99.50th=[54789], 99.90th=[56361], 99.95th=[57410], 00:41:37.994 | 99.99th=[57410] 00:41:37.994 bw ( KiB/s): min=20736, max=34560, per=33.93%, avg=28492.80, stdev=4432.49, samples=10 00:41:37.994 iops : min= 162, max= 270, avg=222.60, stdev=34.63, samples=10 00:41:37.994 lat (msec) : 10=40.14%, 20=53.49%, 50=1.79%, 100=4.57% 00:41:37.994 cpu : usr=92.39%, sys=7.14%, ctx=13, majf=0, minf=66 00:41:37.994 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.994 issued rwts: total=1116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.994 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:37.994 00:41:37.994 Run status group 0 (all jobs): 00:41:37.994 READ: bw=82.0MiB/s (86.0MB/s), 25.7MiB/s-28.7MiB/s (26.9MB/s-30.1MB/s), io=414MiB (434MB), run=5040-5046msec 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- 
# rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 bdev_null0 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 [2024-10-28 05:17:27.800513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 bdev_null1 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 bdev_null2 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:37.994 { 00:41:37.994 "params": { 00:41:37.994 "name": 
"Nvme$subsystem", 00:41:37.994 "trtype": "$TEST_TRANSPORT", 00:41:37.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:37.994 "adrfam": "ipv4", 00:41:37.994 "trsvcid": "$NVMF_PORT", 00:41:37.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:37.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:37.994 "hdgst": ${hdgst:-false}, 00:41:37.994 "ddgst": ${ddgst:-false} 00:41:37.994 }, 00:41:37.994 "method": "bdev_nvme_attach_controller" 00:41:37.994 } 00:41:37.994 EOF 00:41:37.994 )") 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:37.994 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:37.995 { 00:41:37.995 "params": { 00:41:37.995 "name": "Nvme$subsystem", 00:41:37.995 "trtype": "$TEST_TRANSPORT", 00:41:37.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:37.995 "adrfam": "ipv4", 00:41:37.995 "trsvcid": "$NVMF_PORT", 00:41:37.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:37.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:37.995 "hdgst": ${hdgst:-false}, 00:41:37.995 "ddgst": ${ddgst:-false} 00:41:37.995 }, 00:41:37.995 "method": "bdev_nvme_attach_controller" 00:41:37.995 } 00:41:37.995 EOF 00:41:37.995 )") 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:37.995 { 00:41:37.995 "params": { 00:41:37.995 "name": "Nvme$subsystem", 00:41:37.995 "trtype": "$TEST_TRANSPORT", 00:41:37.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:37.995 "adrfam": "ipv4", 00:41:37.995 "trsvcid": "$NVMF_PORT", 00:41:37.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:37.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:37.995 "hdgst": ${hdgst:-false}, 00:41:37.995 "ddgst": ${ddgst:-false} 00:41:37.995 }, 00:41:37.995 "method": "bdev_nvme_attach_controller" 00:41:37.995 } 00:41:37.995 EOF 00:41:37.995 )") 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:37.995 "params": { 00:41:37.995 "name": "Nvme0", 00:41:37.995 "trtype": "tcp", 00:41:37.995 "traddr": "10.0.0.2", 00:41:37.995 "adrfam": "ipv4", 00:41:37.995 "trsvcid": "4420", 00:41:37.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:37.995 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:37.995 "hdgst": false, 00:41:37.995 "ddgst": false 00:41:37.995 }, 00:41:37.995 "method": "bdev_nvme_attach_controller" 00:41:37.995 },{ 00:41:37.995 "params": { 00:41:37.995 "name": "Nvme1", 00:41:37.995 "trtype": "tcp", 00:41:37.995 "traddr": "10.0.0.2", 00:41:37.995 "adrfam": "ipv4", 00:41:37.995 "trsvcid": "4420", 00:41:37.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:37.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:37.995 "hdgst": false, 00:41:37.995 "ddgst": false 00:41:37.995 }, 00:41:37.995 "method": "bdev_nvme_attach_controller" 00:41:37.995 },{ 00:41:37.995 "params": { 00:41:37.995 "name": "Nvme2", 00:41:37.995 "trtype": "tcp", 00:41:37.995 "traddr": "10.0.0.2", 00:41:37.995 "adrfam": "ipv4", 00:41:37.995 "trsvcid": "4420", 00:41:37.995 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:37.995 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:37.995 "hdgst": false, 00:41:37.995 "ddgst": false 00:41:37.995 }, 00:41:37.995 "method": "bdev_nvme_attach_controller" 00:41:37.995 }' 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:37.995 05:17:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:37.995 05:17:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:37.995 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:37.995 ... 00:41:37.995 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:37.995 ... 00:41:37.995 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:37.995 ... 00:41:37.995 fio-3.35 00:41:37.995 Starting 24 threads 00:41:50.203 00:41:50.203 filename0: (groupid=0, jobs=1): err= 0: pid=2546460: Mon Oct 28 05:17:39 2024 00:41:50.203 read: IOPS=457, BW=1832KiB/s (1876kB/s)(18.1MiB/10133msec) 00:41:50.203 slat (usec): min=8, max=112, avg=42.46, stdev=16.50 00:41:50.203 clat (msec): min=20, max=156, avg=34.57, stdev= 7.23 00:41:50.203 lat (msec): min=20, max=156, avg=34.61, stdev= 7.23 00:41:50.203 clat percentiles (msec): 00:41:50.203 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.203 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:41:50.203 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.203 | 99.00th=[ 37], 99.50th=[ 37], 99.90th=[ 157], 99.95th=[ 157], 00:41:50.203 | 99.99th=[ 157] 00:41:50.203 bw ( KiB/s): min= 1792, max= 1920, per=4.22%, avg=1849.60, stdev=65.33, samples=20 00:41:50.203 iops : min= 448, max= 480, avg=462.40, stdev=16.33, samples=20 00:41:50.203 lat (msec) : 50=99.66%, 250=0.34% 00:41:50.203 cpu : usr=98.23%, sys=1.33%, ctx=19, majf=0, minf=38 00:41:50.203 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.203 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.203 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.203 filename0: (groupid=0, jobs=1): err= 0: pid=2546461: Mon Oct 28 05:17:39 2024 00:41:50.203 read: IOPS=456, BW=1824KiB/s (1868kB/s)(18.0MiB/10104msec) 00:41:50.203 slat (usec): min=11, max=146, avg=53.65, stdev=24.86 00:41:50.203 clat (msec): min=32, max=157, avg=34.55, stdev= 7.35 00:41:50.203 lat (msec): min=32, max=157, avg=34.60, stdev= 7.35 00:41:50.203 clat percentiles (msec): 00:41:50.203 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.203 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:41:50.203 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.203 | 99.00th=[ 37], 99.50th=[ 57], 99.90th=[ 157], 99.95th=[ 157], 00:41:50.203 | 99.99th=[ 157] 00:41:50.203 bw ( KiB/s): min= 1558, max= 1920, per=4.17%, avg=1831.50, stdev=99.31, samples=20 00:41:50.203 iops : min= 389, max= 480, avg=457.85, stdev=24.90, samples=20 00:41:50.203 lat (msec) : 50=99.31%, 100=0.35%, 250=0.35% 00:41:50.203 cpu : usr=98.39%, sys=1.15%, ctx=16, majf=0, minf=49 00:41:50.203 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.203 
complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.203 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.203 filename0: (groupid=0, jobs=1): err= 0: pid=2546462: Mon Oct 28 05:17:39 2024 00:41:50.203 read: IOPS=464, BW=1856KiB/s (1901kB/s)(18.3MiB/10102msec) 00:41:50.203 slat (nsec): min=4145, max=61635, avg=17293.41, stdev=8799.54 00:41:50.203 clat (msec): min=6, max=107, avg=34.33, stdev= 5.10 00:41:50.203 lat (msec): min=6, max=107, avg=34.35, stdev= 5.10 00:41:50.203 clat percentiles (msec): 00:41:50.203 | 1.00th=[ 15], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.203 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.203 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 36], 95.00th=[ 36], 00:41:50.203 | 99.00th=[ 37], 99.50th=[ 37], 99.90th=[ 108], 99.95th=[ 108], 00:41:50.203 | 99.99th=[ 108] 00:41:50.203 bw ( KiB/s): min= 1792, max= 2176, per=4.26%, avg=1868.80, stdev=96.50, samples=20 00:41:50.204 iops : min= 448, max= 544, avg=467.20, stdev=24.13, samples=20 00:41:50.204 lat (msec) : 10=0.49%, 20=0.68%, 50=98.49%, 250=0.34% 00:41:50.204 cpu : usr=98.11%, sys=1.41%, ctx=27, majf=0, minf=85 00:41:50.204 IO depths : 1=6.1%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:50.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.204 filename0: (groupid=0, jobs=1): err= 0: pid=2546463: Mon Oct 28 05:17:39 2024 00:41:50.204 read: IOPS=457, BW=1831KiB/s (1875kB/s)(18.1MiB/10134msec) 00:41:50.204 slat (nsec): min=6283, max=90282, avg=39797.53, stdev=14156.32 00:41:50.204 clat (msec): min=20, max=156, avg=34.60, stdev= 7.25 00:41:50.204 lat (msec): min=20, max=156, avg=34.64, stdev= 7.25 00:41:50.204 clat percentiles (msec): 00:41:50.204 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.204 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.204 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.204 | 99.00th=[ 37], 99.50th=[ 37], 99.90th=[ 157], 99.95th=[ 157], 00:41:50.204 | 99.99th=[ 157] 00:41:50.204 bw ( KiB/s): min= 1792, max= 1920, per=4.22%, avg=1849.60, stdev=65.33, samples=20 00:41:50.204 iops : min= 448, max= 480, avg=462.40, stdev=16.33, samples=20 00:41:50.204 lat (msec) : 50=99.66%, 250=0.34% 00:41:50.204 cpu : usr=98.38%, sys=1.21%, ctx=16, majf=0, minf=45 00:41:50.204 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.204 filename0: (groupid=0, jobs=1): err= 0: pid=2546465: Mon Oct 28 05:17:39 2024 00:41:50.204 read: IOPS=457, BW=1831KiB/s (1875kB/s)(18.1MiB/10134msec) 00:41:50.204 slat (nsec): min=13239, max=99326, avg=42099.65, stdev=13570.13 00:41:50.204 clat (msec): min=20, max=156, avg=34.57, stdev= 7.24 00:41:50.204 lat (msec): min=20, max=156, avg=34.61, stdev= 7.24 00:41:50.204 clat percentiles (msec): 00:41:50.204 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 
34], 20.00th=[ 34], 00:41:50.204 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:41:50.204 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.204 | 99.00th=[ 37], 99.50th=[ 37], 99.90th=[ 157], 99.95th=[ 157], 00:41:50.204 | 99.99th=[ 157] 00:41:50.204 bw ( KiB/s): min= 1792, max= 1920, per=4.22%, avg=1849.60, stdev=65.33, samples=20 00:41:50.204 iops : min= 448, max= 480, avg=462.40, stdev=16.33, samples=20 00:41:50.204 lat (msec) : 50=99.66%, 250=0.34% 00:41:50.204 cpu : usr=97.78%, sys=1.75%, ctx=29, majf=0, minf=35 00:41:50.204 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.204 filename0: (groupid=0, jobs=1): err= 0: pid=2546466: Mon Oct 28 05:17:39 2024 00:41:50.204 read: IOPS=457, BW=1831KiB/s (1875kB/s)(18.1MiB/10137msec) 00:41:50.204 slat (nsec): min=8683, max=96864, avg=39432.58, stdev=14590.75 00:41:50.204 clat (msec): min=20, max=159, avg=34.62, stdev= 7.21 00:41:50.204 lat (msec): min=20, max=159, avg=34.66, stdev= 7.21 00:41:50.204 clat percentiles (msec): 00:41:50.204 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.204 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.204 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.204 | 99.00th=[ 37], 99.50th=[ 38], 99.90th=[ 157], 99.95th=[ 157], 00:41:50.204 | 99.99th=[ 161] 00:41:50.204 bw ( KiB/s): min= 1792, max= 1920, per=4.22%, avg=1849.60, stdev=65.33, samples=20 00:41:50.204 iops : min= 448, max= 480, avg=462.40, stdev=16.33, samples=20 00:41:50.204 lat (msec) : 50=99.66%, 250=0.34% 00:41:50.204 cpu : usr=91.05%, sys=4.84%, ctx=259, majf=0, minf=60 00:41:50.204 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.204 filename0: (groupid=0, jobs=1): err= 0: pid=2546467: Mon Oct 28 05:17:39 2024 00:41:50.204 read: IOPS=455, BW=1824KiB/s (1868kB/s)(18.0MiB/10106msec) 00:41:50.204 slat (nsec): min=8398, max=96809, avg=33377.92, stdev=19734.95 00:41:50.204 clat (msec): min=31, max=153, avg=34.78, stdev= 7.21 00:41:50.204 lat (msec): min=32, max=153, avg=34.81, stdev= 7.21 00:41:50.204 clat percentiles (msec): 00:41:50.204 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.204 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.204 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.204 | 99.00th=[ 37], 99.50th=[ 63], 99.90th=[ 155], 99.95th=[ 155], 00:41:50.204 | 99.99th=[ 155] 00:41:50.204 bw ( KiB/s): min= 1667, max= 1920, per=4.17%, avg=1831.20, stdev=82.46, samples=20 00:41:50.204 iops : min= 416, max= 480, avg=457.75, stdev=20.72, samples=20 00:41:50.204 lat (msec) : 50=99.31%, 100=0.35%, 250=0.35% 00:41:50.204 cpu : usr=96.29%, sys=2.10%, ctx=121, majf=0, minf=30 00:41:50.204 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.204 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.204 filename0: (groupid=0, jobs=1): err= 0: pid=2546468: Mon Oct 28 05:17:39 2024 00:41:50.204 read: IOPS=455, BW=1824KiB/s (1868kB/s)(18.0MiB/10106msec) 00:41:50.204 slat (usec): min=8, max=589, avg=36.65, stdev=23.85 00:41:50.204 clat (msec): min=32, max=153, avg=34.75, stdev= 7.20 00:41:50.204 lat (msec): min=32, max=153, avg=34.78, stdev= 7.20 00:41:50.204 clat percentiles (msec): 00:41:50.204 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.204 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.204 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.204 | 99.00th=[ 37], 99.50th=[ 63], 99.90th=[ 153], 99.95th=[ 153], 00:41:50.204 | 99.99th=[ 155] 00:41:50.204 bw ( KiB/s): min= 1667, max= 1920, per=4.17%, avg=1831.20, stdev=82.46, samples=20 00:41:50.204 iops : min= 416, max= 480, avg=457.75, stdev=20.72, samples=20 00:41:50.204 lat (msec) : 50=99.31%, 100=0.35%, 250=0.35% 00:41:50.204 cpu : usr=93.14%, sys=3.96%, ctx=281, majf=0, minf=37 00:41:50.204 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.204 filename1: (groupid=0, jobs=1): err= 0: pid=2546469: Mon Oct 28 05:17:39 2024 00:41:50.204 read: IOPS=456, BW=1826KiB/s (1870kB/s)(17.9MiB/10044msec) 00:41:50.204 slat (usec): min=8, max=115, avg=36.52, stdev=23.33 00:41:50.204 clat (msec): min=14, max=108, avg=34.81, stdev= 5.57 00:41:50.204 lat (msec): min=14, max=108, avg=34.85, stdev= 5.57 00:41:50.204 clat percentiles (msec): 00:41:50.204 | 1.00th=[ 27], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.204 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.204 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 36], 95.00th=[ 40], 00:41:50.204 | 99.00th=[ 52], 99.50th=[ 63], 99.90th=[ 109], 99.95th=[ 109], 00:41:50.204 | 99.99th=[ 109] 00:41:50.204 bw ( KiB/s): min= 1558, max= 1936, per=4.15%, avg=1822.85, stdev=85.08, samples=20 00:41:50.204 iops : min= 389, max= 484, avg=455.65, stdev=21.43, samples=20 00:41:50.204 lat (msec) : 20=0.24%, 50=98.58%, 100=0.83%, 250=0.35% 00:41:50.204 cpu : usr=98.12%, sys=1.40%, ctx=19, majf=0, minf=55 00:41:50.204 IO depths : 1=0.4%, 2=3.3%, 4=12.3%, 8=69.2%, 16=14.8%, 32=0.0%, >=64=0.0% 00:41:50.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 complete : 0=0.0%, 4=91.6%, 8=5.3%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.204 issued rwts: total=4586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.204 filename1: (groupid=0, jobs=1): err= 0: pid=2546471: Mon Oct 28 05:17:39 2024 00:41:50.204 read: IOPS=457, BW=1832KiB/s (1876kB/s)(18.1MiB/10133msec) 00:41:50.204 slat (usec): min=10, max=111, avg=43.10, stdev=15.62 00:41:50.204 clat (msec): min=20, max=156, avg=34.58, stdev= 7.24 00:41:50.204 lat (msec): min=20, max=156, avg=34.62, stdev= 7.24 00:41:50.205 clat percentiles (msec): 
00:41:50.205 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.205 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.205 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.205 | 99.00th=[ 37], 99.50th=[ 37], 99.90th=[ 157], 99.95th=[ 157], 00:41:50.205 | 99.99th=[ 157] 00:41:50.205 bw ( KiB/s): min= 1792, max= 1920, per=4.22%, avg=1849.60, stdev=65.33, samples=20 00:41:50.205 iops : min= 448, max= 480, avg=462.40, stdev=16.33, samples=20 00:41:50.205 lat (msec) : 50=99.66%, 250=0.34% 00:41:50.205 cpu : usr=97.52%, sys=2.00%, ctx=56, majf=0, minf=43 00:41:50.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.205 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.205 filename1: (groupid=0, jobs=1): err= 0: pid=2546472: Mon Oct 28 05:17:39 2024 00:41:50.205 read: IOPS=457, BW=1832KiB/s (1876kB/s)(18.1MiB/10133msec) 00:41:50.205 slat (usec): min=8, max=141, avg=28.19, stdev=22.37 00:41:50.205 clat (msec): min=20, max=156, avg=34.71, stdev= 7.21 00:41:50.205 lat (msec): min=20, max=156, avg=34.74, stdev= 7.21 00:41:50.205 clat percentiles (msec): 00:41:50.205 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.205 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.205 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 36], 95.00th=[ 36], 00:41:50.205 | 99.00th=[ 37], 99.50th=[ 37], 99.90th=[ 157], 99.95th=[ 157], 00:41:50.205 | 99.99th=[ 157] 00:41:50.205 bw ( KiB/s): min= 1792, max= 1920, per=4.22%, avg=1849.60, stdev=65.33, samples=20 00:41:50.205 iops : min= 448, max= 480, avg=462.40, stdev=16.33, samples=20 00:41:50.205 lat (msec) : 50=99.66%, 250=0.34% 00:41:50.205 cpu : usr=96.58%, sys=2.62%, ctx=119, majf=0, minf=54 00:41:50.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.205 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.205 filename1: (groupid=0, jobs=1): err= 0: pid=2546473: Mon Oct 28 05:17:39 2024 00:41:50.205 read: IOPS=457, BW=1829KiB/s (1873kB/s)(17.9MiB/10043msec) 00:41:50.205 slat (nsec): min=7974, max=98445, avg=25348.97, stdev=17134.82 00:41:50.205 clat (msec): min=32, max=108, avg=34.75, stdev= 5.15 00:41:50.205 lat (msec): min=32, max=108, avg=34.78, stdev= 5.15 00:41:50.205 clat percentiles (msec): 00:41:50.205 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.205 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.205 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 36], 95.00th=[ 36], 00:41:50.205 | 99.00th=[ 37], 99.50th=[ 81], 99.90th=[ 109], 99.95th=[ 109], 00:41:50.205 | 99.99th=[ 109] 00:41:50.205 bw ( KiB/s): min= 1563, max= 1920, per=4.16%, avg=1825.35, stdev=96.70, samples=20 00:41:50.205 iops : min= 390, max= 480, avg=456.30, stdev=24.28, samples=20 00:41:50.205 lat (msec) : 50=99.30%, 100=0.35%, 250=0.35% 00:41:50.205 cpu : usr=98.43%, sys=1.17%, ctx=15, majf=0, minf=55 00:41:50.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, 
>=64=0.0% 00:41:50.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.205 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.205 filename1: (groupid=0, jobs=1): err= 0: pid=2546474: Mon Oct 28 05:17:39 2024 00:41:50.205 read: IOPS=455, BW=1824KiB/s (1867kB/s)(18.0MiB/10107msec) 00:41:50.205 slat (nsec): min=8156, max=61779, avg=24927.74, stdev=11170.34 00:41:50.205 clat (msec): min=18, max=154, avg=34.87, stdev= 7.25 00:41:50.205 lat (msec): min=18, max=154, avg=34.89, stdev= 7.25 00:41:50.205 clat percentiles (msec): 00:41:50.205 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.205 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.205 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 36], 95.00th=[ 36], 00:41:50.205 | 99.00th=[ 37], 99.50th=[ 63], 99.90th=[ 153], 99.95th=[ 153], 00:41:50.205 | 99.99th=[ 155] 00:41:50.205 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1831.40, stdev=82.10, samples=20 00:41:50.205 iops : min= 416, max= 480, avg=457.85, stdev=20.53, samples=20 00:41:50.205 lat (msec) : 20=0.09%, 50=99.18%, 100=0.39%, 250=0.35% 00:41:50.205 cpu : usr=97.93%, sys=1.46%, ctx=140, majf=0, minf=48 00:41:50.205 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:50.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.205 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.205 filename1: (groupid=0, jobs=1): err= 0: pid=2546476: Mon Oct 28 05:17:39 2024 00:41:50.205 read: IOPS=457, BW=1831KiB/s (1875kB/s)(18.1MiB/10134msec) 00:41:50.205 slat (usec): min=8, max=134, avg=44.84, stdev=16.67 00:41:50.205 clat (msec): min=20, max=156, avg=34.54, stdev= 7.32 00:41:50.205 lat (msec): min=20, max=157, avg=34.58, stdev= 7.32 00:41:50.205 clat percentiles (msec): 00:41:50.205 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.205 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:41:50.205 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.205 | 99.00th=[ 41], 99.50th=[ 42], 99.90th=[ 157], 99.95th=[ 157], 00:41:50.205 | 99.99th=[ 157] 00:41:50.205 bw ( KiB/s): min= 1792, max= 1920, per=4.22%, avg=1849.60, stdev=65.33, samples=20 00:41:50.205 iops : min= 448, max= 480, avg=462.40, stdev=16.33, samples=20 00:41:50.205 lat (msec) : 50=99.66%, 250=0.34% 00:41:50.205 cpu : usr=98.00%, sys=1.46%, ctx=51, majf=0, minf=36 00:41:50.205 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:41:50.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.205 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.205 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.205 filename1: (groupid=0, jobs=1): err= 0: pid=2546477: Mon Oct 28 05:17:39 2024 00:41:50.205 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10007msec) 00:41:50.205 slat (nsec): min=7886, max=79684, avg=22405.02, stdev=14494.16 00:41:50.205 clat (usec): min=7875, max=51395, avg=34079.10, stdev=2753.91 00:41:50.205 lat (usec): min=7898, max=51455, 
avg=34101.50, stdev=2752.02 00:41:50.205 clat percentiles (usec): 00:41:50.205 | 1.00th=[10421], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:41:50.205 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:41:50.205 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[35914], 00:41:50.205 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:41:50.205 | 99.99th=[51643] 00:41:50.205 bw ( KiB/s): min= 1792, max= 2176, per=4.24%, avg=1862.40, stdev=97.17, samples=20 00:41:50.205 iops : min= 448, max= 544, avg=465.60, stdev=24.29, samples=20 00:41:50.205 lat (msec) : 10=0.68%, 20=0.39%, 50=98.89%, 100=0.04% 00:41:50.205 cpu : usr=97.59%, sys=1.59%, ctx=58, majf=0, minf=45 00:41:50.205 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.205 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.205 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.205 filename1: (groupid=0, jobs=1): err= 0: pid=2546479: Mon Oct 28 05:17:39 2024 00:41:50.205 read: IOPS=476, BW=1908KiB/s (1954kB/s)(18.8MiB/10106msec) 00:41:50.205 slat (usec): min=7, max=150, avg=24.73, stdev=15.69 00:41:50.205 clat (msec): min=12, max=153, avg=33.41, stdev= 8.71 00:41:50.205 lat (msec): min=12, max=153, avg=33.43, stdev= 8.71 00:41:50.205 clat percentiles (msec): 00:41:50.205 | 1.00th=[ 18], 5.00th=[ 22], 10.00th=[ 26], 20.00th=[ 34], 00:41:50.205 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.206 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 36], 95.00th=[ 36], 00:41:50.206 | 99.00th=[ 57], 99.50th=[ 63], 99.90th=[ 155], 99.95th=[ 155], 00:41:50.206 | 99.99th=[ 155] 00:41:50.206 bw ( KiB/s): min= 1677, max= 2224, per=4.37%, avg=1916.00, stdev=155.84, samples=20 00:41:50.206 iops : min= 419, max= 556, avg=478.95, stdev=39.03, samples=20 00:41:50.206 lat (msec) : 20=1.95%, 50=96.39%, 100=1.33%, 250=0.33% 00:41:50.206 cpu : usr=94.50%, sys=2.76%, ctx=508, majf=0, minf=81 00:41:50.206 IO depths : 1=1.0%, 2=3.0%, 4=9.2%, 8=72.2%, 16=14.7%, 32=0.0%, >=64=0.0% 00:41:50.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.206 complete : 0=0.0%, 4=90.8%, 8=6.5%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.206 issued rwts: total=4820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.206 filename2: (groupid=0, jobs=1): err= 0: pid=2546480: Mon Oct 28 05:17:39 2024 00:41:50.206 read: IOPS=455, BW=1824KiB/s (1867kB/s)(18.0MiB/10107msec) 00:41:50.206 slat (usec): min=11, max=109, avg=42.83, stdev=14.65 00:41:50.206 clat (msec): min=32, max=159, avg=34.68, stdev= 7.37 00:41:50.206 lat (msec): min=32, max=159, avg=34.73, stdev= 7.37 00:41:50.206 clat percentiles (msec): 00:41:50.206 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.206 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:41:50.206 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.206 | 99.00th=[ 37], 99.50th=[ 56], 99.90th=[ 157], 99.95th=[ 157], 00:41:50.206 | 99.99th=[ 161] 00:41:50.206 bw ( KiB/s): min= 1558, max= 1920, per=4.17%, avg=1831.65, stdev=99.05, samples=20 00:41:50.206 iops : min= 389, max= 480, avg=457.85, stdev=24.90, samples=20 00:41:50.206 lat (msec) : 50=99.31%, 100=0.35%, 250=0.35% 00:41:50.206 cpu : 
usr=98.21%, sys=1.32%, ctx=38, majf=0, minf=28 00:41:50.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.206 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.206 filename2: (groupid=0, jobs=1): err= 0: pid=2546481: Mon Oct 28 05:17:39 2024 00:41:50.206 read: IOPS=455, BW=1824KiB/s (1867kB/s)(18.0MiB/10107msec) 00:41:50.206 slat (usec): min=8, max=105, avg=34.98, stdev=22.46 00:41:50.206 clat (msec): min=32, max=154, avg=34.76, stdev= 7.21 00:41:50.206 lat (msec): min=32, max=154, avg=34.79, stdev= 7.21 00:41:50.206 clat percentiles (msec): 00:41:50.206 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.206 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.206 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.206 | 99.00th=[ 37], 99.50th=[ 63], 99.90th=[ 155], 99.95th=[ 155], 00:41:50.206 | 99.99th=[ 155] 00:41:50.206 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1831.40, stdev=82.10, samples=20 00:41:50.206 iops : min= 416, max= 480, avg=457.85, stdev=20.53, samples=20 00:41:50.206 lat (msec) : 50=99.31%, 100=0.35%, 250=0.35% 00:41:50.206 cpu : usr=98.27%, sys=1.29%, ctx=25, majf=0, minf=44 00:41:50.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.206 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.206 filename2: (groupid=0, jobs=1): err= 0: pid=2546482: Mon Oct 28 05:17:39 2024 00:41:50.206 read: IOPS=457, BW=1829KiB/s (1873kB/s)(17.9MiB/10043msec) 00:41:50.206 slat (usec): min=7, max=102, avg=33.27, stdev=23.54 00:41:50.206 clat (msec): min=32, max=108, avg=34.68, stdev= 5.16 00:41:50.206 lat (msec): min=32, max=108, avg=34.71, stdev= 5.16 00:41:50.206 clat percentiles (msec): 00:41:50.206 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.206 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.206 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.206 | 99.00th=[ 37], 99.50th=[ 81], 99.90th=[ 109], 99.95th=[ 109], 00:41:50.206 | 99.99th=[ 109] 00:41:50.206 bw ( KiB/s): min= 1563, max= 1920, per=4.16%, avg=1825.35, stdev=96.70, samples=20 00:41:50.206 iops : min= 390, max= 480, avg=456.30, stdev=24.28, samples=20 00:41:50.206 lat (msec) : 50=99.30%, 100=0.35%, 250=0.35% 00:41:50.206 cpu : usr=98.12%, sys=1.46%, ctx=14, majf=0, minf=31 00:41:50.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.206 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.206 filename2: (groupid=0, jobs=1): err= 0: pid=2546483: Mon Oct 28 05:17:39 2024 00:41:50.206 read: IOPS=458, BW=1833KiB/s (1877kB/s)(18.0MiB/10054msec) 00:41:50.206 slat (nsec): min=8059, max=94068, avg=27166.83, stdev=18245.00 
00:41:50.206 clat (msec): min=32, max=108, avg=34.65, stdev= 4.52 00:41:50.206 lat (msec): min=32, max=108, avg=34.68, stdev= 4.52 00:41:50.206 clat percentiles (msec): 00:41:50.206 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.206 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:41:50.206 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 36], 95.00th=[ 36], 00:41:50.206 | 99.00th=[ 37], 99.50th=[ 54], 99.90th=[ 109], 99.95th=[ 109], 00:41:50.206 | 99.99th=[ 109] 00:41:50.206 bw ( KiB/s): min= 1532, max= 1920, per=4.17%, avg=1830.20, stdev=103.17, samples=20 00:41:50.206 iops : min= 383, max= 480, avg=457.55, stdev=25.79, samples=20 00:41:50.206 lat (msec) : 50=99.31%, 100=0.35%, 250=0.35% 00:41:50.206 cpu : usr=98.28%, sys=1.30%, ctx=18, majf=0, minf=39 00:41:50.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.206 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.206 filename2: (groupid=0, jobs=1): err= 0: pid=2546484: Mon Oct 28 05:17:39 2024 00:41:50.206 read: IOPS=457, BW=1831KiB/s (1875kB/s)(18.1MiB/10134msec) 00:41:50.206 slat (usec): min=14, max=109, avg=43.61, stdev=14.40 00:41:50.206 clat (msec): min=20, max=156, avg=34.55, stdev= 7.24 00:41:50.206 lat (msec): min=20, max=156, avg=34.60, stdev= 7.24 00:41:50.206 clat percentiles (msec): 00:41:50.206 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.206 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:41:50.206 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.206 | 99.00th=[ 37], 99.50th=[ 37], 99.90th=[ 157], 99.95th=[ 157], 00:41:50.206 | 99.99th=[ 157] 00:41:50.206 bw ( KiB/s): min= 1792, max= 1920, per=4.22%, avg=1849.60, stdev=65.33, samples=20 00:41:50.206 iops : min= 448, max= 480, avg=462.40, stdev=16.33, samples=20 00:41:50.206 lat (msec) : 50=99.66%, 250=0.34% 00:41:50.206 cpu : usr=93.78%, sys=3.45%, ctx=299, majf=0, minf=45 00:41:50.206 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.206 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.206 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.206 filename2: (groupid=0, jobs=1): err= 0: pid=2546485: Mon Oct 28 05:17:39 2024 00:41:50.206 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10007msec) 00:41:50.206 slat (usec): min=6, max=180, avg=45.14, stdev=25.15 00:41:50.206 clat (usec): min=7263, max=43103, avg=33872.64, stdev=2708.27 00:41:50.206 lat (usec): min=7277, max=43270, avg=33917.78, stdev=2709.11 00:41:50.206 clat percentiles (usec): 00:41:50.206 | 1.00th=[10421], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:41:50.206 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:41:50.206 | 70.00th=[34341], 80.00th=[34866], 90.00th=[34866], 95.00th=[35390], 00:41:50.206 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:41:50.206 | 99.99th=[43254] 00:41:50.206 bw ( KiB/s): min= 1792, max= 2176, per=4.24%, avg=1862.40, stdev=97.17, samples=20 00:41:50.206 iops : min= 448, max= 544, avg=465.60, stdev=24.29, 
samples=20 00:41:50.206 lat (msec) : 10=0.68%, 20=0.34%, 50=98.97% 00:41:50.206 cpu : usr=95.02%, sys=2.96%, ctx=427, majf=0, minf=44 00:41:50.207 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.207 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.207 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.207 filename2: (groupid=0, jobs=1): err= 0: pid=2546486: Mon Oct 28 05:17:39 2024 00:41:50.207 read: IOPS=460, BW=1841KiB/s (1885kB/s)(18.2MiB/10150msec) 00:41:50.207 slat (usec): min=5, max=139, avg=38.60, stdev=22.33 00:41:50.207 clat (msec): min=6, max=156, avg=34.38, stdev= 7.52 00:41:50.207 lat (msec): min=6, max=156, avg=34.42, stdev= 7.52 00:41:50.207 clat percentiles (msec): 00:41:50.207 | 1.00th=[ 22], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.207 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:41:50.207 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.207 | 99.00th=[ 37], 99.50th=[ 37], 99.90th=[ 157], 99.95th=[ 157], 00:41:50.207 | 99.99th=[ 157] 00:41:50.207 bw ( KiB/s): min= 1792, max= 2048, per=4.24%, avg=1862.40, stdev=77.42, samples=20 00:41:50.207 iops : min= 448, max= 512, avg=465.60, stdev=19.35, samples=20 00:41:50.207 lat (msec) : 10=0.34%, 20=0.34%, 50=98.97%, 250=0.34% 00:41:50.207 cpu : usr=97.61%, sys=1.63%, ctx=35, majf=0, minf=48 00:41:50.207 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.207 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.207 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.207 filename2: (groupid=0, jobs=1): err= 0: pid=2546487: Mon Oct 28 05:17:39 2024 00:41:50.207 read: IOPS=457, BW=1831KiB/s (1875kB/s)(18.1MiB/10134msec) 00:41:50.207 slat (usec): min=13, max=123, avg=41.78, stdev=14.84 00:41:50.207 clat (msec): min=20, max=157, avg=34.54, stdev= 7.26 00:41:50.207 lat (msec): min=20, max=157, avg=34.59, stdev= 7.26 00:41:50.207 clat percentiles (msec): 00:41:50.207 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.207 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:41:50.207 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 36], 00:41:50.207 | 99.00th=[ 37], 99.50th=[ 37], 99.90th=[ 157], 99.95th=[ 157], 00:41:50.207 | 99.99th=[ 157] 00:41:50.207 bw ( KiB/s): min= 1792, max= 1920, per=4.22%, avg=1849.60, stdev=65.33, samples=20 00:41:50.207 iops : min= 448, max= 480, avg=462.40, stdev=16.33, samples=20 00:41:50.207 lat (msec) : 50=99.66%, 250=0.34% 00:41:50.207 cpu : usr=98.06%, sys=1.51%, ctx=10, majf=0, minf=34 00:41:50.207 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.207 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.207 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.207 00:41:50.207 Run status group 0 (all jobs): 00:41:50.207 READ: bw=42.8MiB/s (44.9MB/s), 1824KiB/s-1908KiB/s (1867kB/s-1954kB/s), io=435MiB (456MB), 
run=10007-10150msec 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.207 bdev_null0 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.207 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.207 [2024-10-28 05:17:39.868778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.208 bdev_null1 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:50.208 { 00:41:50.208 "params": { 00:41:50.208 "name": "Nvme$subsystem", 00:41:50.208 "trtype": "$TEST_TRANSPORT", 00:41:50.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:50.208 "adrfam": "ipv4", 00:41:50.208 "trsvcid": "$NVMF_PORT", 00:41:50.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:50.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:50.208 "hdgst": ${hdgst:-false}, 00:41:50.208 "ddgst": ${ddgst:-false} 00:41:50.208 }, 00:41:50.208 "method": "bdev_nvme_attach_controller" 00:41:50.208 } 00:41:50.208 EOF 00:41:50.208 )") 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:50.208 
05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:50.208 { 00:41:50.208 "params": { 00:41:50.208 "name": "Nvme$subsystem", 00:41:50.208 "trtype": "$TEST_TRANSPORT", 00:41:50.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:50.208 "adrfam": "ipv4", 00:41:50.208 "trsvcid": "$NVMF_PORT", 00:41:50.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:50.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:50.208 "hdgst": ${hdgst:-false}, 00:41:50.208 "ddgst": ${ddgst:-false} 00:41:50.208 }, 00:41:50.208 "method": "bdev_nvme_attach_controller" 00:41:50.208 } 00:41:50.208 EOF 00:41:50.208 )") 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
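Before launching fio, the wrapper traced here probes the spdk_bdev engine binary for a linked-in sanitizer runtime and, when one is found, preloads it ahead of the engine. Condensed into a standalone sketch (same paths as this workspace, the per-sanitizer loop written out explicitly, so treat it as illustrative rather than the literal fio_plugin body):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # the third ldd column is the resolved path of the sanitizer runtime, when present
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# an empty asan_lib (as in this run) leaves only the bdev engine itself in LD_PRELOAD
export LD_PRELOAD="$asan_lib $plugin"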
00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:50.208 "params": { 00:41:50.208 "name": "Nvme0", 00:41:50.208 "trtype": "tcp", 00:41:50.208 "traddr": "10.0.0.2", 00:41:50.208 "adrfam": "ipv4", 00:41:50.208 "trsvcid": "4420", 00:41:50.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:50.208 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:50.208 "hdgst": false, 00:41:50.208 "ddgst": false 00:41:50.208 }, 00:41:50.208 "method": "bdev_nvme_attach_controller" 00:41:50.208 },{ 00:41:50.208 "params": { 00:41:50.208 "name": "Nvme1", 00:41:50.208 "trtype": "tcp", 00:41:50.208 "traddr": "10.0.0.2", 00:41:50.208 "adrfam": "ipv4", 00:41:50.208 "trsvcid": "4420", 00:41:50.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:50.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:50.208 "hdgst": false, 00:41:50.208 "ddgst": false 00:41:50.208 }, 00:41:50.208 "method": "bdev_nvme_attach_controller" 00:41:50.208 }' 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:50.208 05:17:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:50.208 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:50.208 ... 00:41:50.208 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:50.208 ... 
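The two-job listing above comes from gen_fio_conf on /dev/fd/61. A minimal standalone job file matching the traced parameters (randread, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5) would look roughly like the sketch below; the filename= bdev names are assumptions for illustration, since the spdk_bdev engine addresses the two attached controllers by bdev name (typically Nvme0n1 and Nvme1n1 for the two subsystems created above). numjobs=2 across the two job sections accounts for the four threads fio reports starting next.

cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF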
00:41:50.208 fio-3.35 00:41:50.208 Starting 4 threads 00:41:56.766 00:41:56.766 filename0: (groupid=0, jobs=1): err= 0: pid=2547829: Mon Oct 28 05:17:46 2024 00:41:56.766 read: IOPS=1863, BW=14.6MiB/s (15.3MB/s)(72.8MiB/5002msec) 00:41:56.766 slat (nsec): min=5262, max=66968, avg=12568.21, stdev=6573.04 00:41:56.766 clat (usec): min=1497, max=7739, avg=4255.19, stdev=737.06 00:41:56.766 lat (usec): min=1507, max=7750, avg=4267.76, stdev=736.68 00:41:56.766 clat percentiles (usec): 00:41:56.766 | 1.00th=[ 2900], 5.00th=[ 3392], 10.00th=[ 3589], 20.00th=[ 3752], 00:41:56.766 | 30.00th=[ 3884], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4228], 00:41:56.766 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 5473], 95.00th=[ 5866], 00:41:56.766 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 7111], 99.95th=[ 7373], 00:41:56.766 | 99.99th=[ 7767] 00:41:56.766 bw ( KiB/s): min=14080, max=15584, per=24.97%, avg=14901.33, stdev=560.80, samples=9 00:41:56.766 iops : min= 1760, max= 1948, avg=1862.67, stdev=70.10, samples=9 00:41:56.766 lat (msec) : 2=0.21%, 4=37.61%, 10=62.17% 00:41:56.766 cpu : usr=93.76%, sys=5.36%, ctx=68, majf=0, minf=37 00:41:56.766 IO depths : 1=0.1%, 2=3.6%, 4=69.2%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:56.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.766 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.766 issued rwts: total=9319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.766 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:56.766 filename0: (groupid=0, jobs=1): err= 0: pid=2547830: Mon Oct 28 05:17:46 2024 00:41:56.766 read: IOPS=1861, BW=14.5MiB/s (15.2MB/s)(72.7MiB/5002msec) 00:41:56.766 slat (nsec): min=5434, max=81448, avg=14107.92, stdev=8004.65 00:41:56.766 clat (usec): min=737, max=9838, avg=4251.50, stdev=701.04 00:41:56.766 lat (usec): min=751, max=9854, avg=4265.60, stdev=700.36 00:41:56.766 clat percentiles (usec): 00:41:56.766 | 1.00th=[ 2868], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3818], 00:41:56.766 | 30.00th=[ 3916], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4228], 00:41:56.766 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 5276], 95.00th=[ 5800], 00:41:56.766 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 7242], 99.95th=[ 7504], 00:41:56.766 | 99.99th=[ 9896] 00:41:56.766 bw ( KiB/s): min=13851, max=15456, per=24.88%, avg=14849.22, stdev=492.17, samples=9 00:41:56.766 iops : min= 1731, max= 1932, avg=1856.11, stdev=61.62, samples=9 00:41:56.766 lat (usec) : 750=0.01%, 1000=0.02% 00:41:56.766 lat (msec) : 2=0.17%, 4=37.37%, 10=62.42% 00:41:56.766 cpu : usr=94.22%, sys=5.18%, ctx=12, majf=0, minf=75 00:41:56.766 IO depths : 1=0.1%, 2=6.0%, 4=66.8%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:56.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.766 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.766 issued rwts: total=9309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.766 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:56.766 filename1: (groupid=0, jobs=1): err= 0: pid=2547831: Mon Oct 28 05:17:46 2024 00:41:56.766 read: IOPS=1884, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5001msec) 00:41:56.766 slat (nsec): min=5259, max=69965, avg=14204.35, stdev=7898.04 00:41:56.766 clat (usec): min=842, max=9413, avg=4199.30, stdev=705.62 00:41:56.766 lat (usec): min=856, max=9427, avg=4213.50, stdev=705.35 00:41:56.766 clat percentiles (usec): 00:41:56.766 | 1.00th=[ 2900], 5.00th=[ 3294], 10.00th=[ 3523], 
20.00th=[ 3720], 00:41:56.766 | 30.00th=[ 3851], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4178], 00:41:56.766 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 5276], 95.00th=[ 5735], 00:41:56.766 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7046], 99.95th=[ 7177], 00:41:56.766 | 99.99th=[ 9372] 00:41:56.766 bw ( KiB/s): min=13851, max=15712, per=25.26%, avg=15075.00, stdev=548.14, samples=9 00:41:56.766 iops : min= 1731, max= 1964, avg=1884.33, stdev=68.62, samples=9 00:41:56.766 lat (usec) : 1000=0.02% 00:41:56.766 lat (msec) : 2=0.12%, 4=40.47%, 10=59.39% 00:41:56.766 cpu : usr=94.44%, sys=5.00%, ctx=9, majf=0, minf=33 00:41:56.766 IO depths : 1=0.1%, 2=5.9%, 4=66.7%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:56.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.766 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.767 issued rwts: total=9422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.767 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:56.767 filename1: (groupid=0, jobs=1): err= 0: pid=2547832: Mon Oct 28 05:17:46 2024 00:41:56.767 read: IOPS=1852, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5003msec) 00:41:56.767 slat (nsec): min=4002, max=66201, avg=15428.22, stdev=7710.36 00:41:56.767 clat (usec): min=952, max=8774, avg=4269.37, stdev=708.68 00:41:56.767 lat (usec): min=965, max=8789, avg=4284.80, stdev=707.68 00:41:56.767 clat percentiles (usec): 00:41:56.767 | 1.00th=[ 2999], 5.00th=[ 3490], 10.00th=[ 3654], 20.00th=[ 3818], 00:41:56.767 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4228], 00:41:56.767 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 5342], 95.00th=[ 5800], 00:41:56.767 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7570], 00:41:56.767 | 99.99th=[ 8717] 00:41:56.767 bw ( KiB/s): min=13824, max=15504, per=24.81%, avg=14807.11, stdev=529.40, samples=9 00:41:56.767 iops : min= 1728, max= 1938, avg=1850.89, stdev=66.17, samples=9 00:41:56.767 lat (usec) : 1000=0.02% 00:41:56.767 lat (msec) : 2=0.09%, 4=38.58%, 10=61.31% 00:41:56.767 cpu : usr=94.80%, sys=4.60%, ctx=18, majf=0, minf=49 00:41:56.767 IO depths : 1=0.1%, 2=4.3%, 4=68.5%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:56.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.767 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.767 issued rwts: total=9268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.767 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:56.767 00:41:56.767 Run status group 0 (all jobs): 00:41:56.767 READ: bw=58.3MiB/s (61.1MB/s), 14.5MiB/s-14.7MiB/s (15.2MB/s-15.4MB/s), io=292MiB (306MB), run=5001-5003msec 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:56.767 05:17:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.767 00:41:56.767 real 0m25.011s 00:41:56.767 user 4m33.184s 00:41:56.767 sys 0m7.768s 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:56.767 05:17:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:56.767 ************************************ 00:41:56.767 END TEST fio_dif_rand_params 00:41:56.767 ************************************ 00:41:56.767 05:17:46 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:56.767 05:17:46 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:56.767 05:17:46 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:56.767 05:17:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:56.767 ************************************ 00:41:56.767 START TEST fio_dif_digest 00:41:56.767 ************************************ 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:56.767 05:17:46 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:56.767 bdev_null0 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:56.767 [2024-10-28 05:17:46.482413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 
00:41:56.767 { 00:41:56.767 "params": { 00:41:56.767 "name": "Nvme$subsystem", 00:41:56.767 "trtype": "$TEST_TRANSPORT", 00:41:56.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:56.767 "adrfam": "ipv4", 00:41:56.767 "trsvcid": "$NVMF_PORT", 00:41:56.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:56.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:56.767 "hdgst": ${hdgst:-false}, 00:41:56.767 "ddgst": ${ddgst:-false} 00:41:56.767 }, 00:41:56.767 "method": "bdev_nvme_attach_controller" 00:41:56.767 } 00:41:56.767 EOF 00:41:56.767 )") 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
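rpc_cmd in this trace forwards to scripts/rpc.py against the running nvmf_tgt. Issued directly, the digest-target setup performed a few lines above reduces to roughly the following, with the arguments copied from the traced calls:

# 64 MB null bdev with 512-byte blocks, 16-byte metadata and DIF type 3
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420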
00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:41:56.767 05:17:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:56.767 "params": { 00:41:56.767 "name": "Nvme0", 00:41:56.767 "trtype": "tcp", 00:41:56.767 "traddr": "10.0.0.2", 00:41:56.768 "adrfam": "ipv4", 00:41:56.768 "trsvcid": "4420", 00:41:56.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:56.768 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:56.768 "hdgst": true, 00:41:56.768 "ddgst": true 00:41:56.768 }, 00:41:56.768 "method": "bdev_nvme_attach_controller" 00:41:56.768 }' 00:41:56.768 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:56.768 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:56.768 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:56.768 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:56.768 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:56.768 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:56.768 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:56.768 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:56.768 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:56.768 05:17:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:56.768 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:56.768 ... 
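At this point everything fio needs is on the two descriptors: the bdev JSON printed above, with hdgst/ddgst enabled to turn on the NVMe/TCP header and data digests for this pass, on /dev/fd/62, and the job file on /dev/fd/61. Reproducing the run outside the harness is a matter of saving both to ordinary files, say bdev.json and digest.fio (names chosen here for illustration), and invoking fio the same way the trace does:

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio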
00:41:56.768 fio-3.35 00:41:56.768 Starting 3 threads 00:42:08.967 00:42:08.967 filename0: (groupid=0, jobs=1): err= 0: pid=2548646: Mon Oct 28 05:17:57 2024 00:42:08.967 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(257MiB/10048msec) 00:42:08.967 slat (nsec): min=5830, max=58875, avg=13591.74, stdev=3264.53 00:42:08.967 clat (usec): min=8966, max=53012, avg=14621.98, stdev=1699.68 00:42:08.967 lat (usec): min=8979, max=53025, avg=14635.57, stdev=1699.70 00:42:08.967 clat percentiles (usec): 00:42:08.967 | 1.00th=[10421], 5.00th=[12518], 10.00th=[13173], 20.00th=[13698], 00:42:08.967 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14746], 60.00th=[15008], 00:42:08.967 | 70.00th=[15270], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:42:08.967 | 99.00th=[17171], 99.50th=[17695], 99.90th=[21103], 99.95th=[49021], 00:42:08.967 | 99.99th=[53216] 00:42:08.967 bw ( KiB/s): min=25088, max=28416, per=34.03%, avg=26291.20, stdev=958.22, samples=20 00:42:08.967 iops : min= 196, max= 222, avg=205.40, stdev= 7.49, samples=20 00:42:08.967 lat (msec) : 10=0.78%, 20=99.08%, 50=0.10%, 100=0.05% 00:42:08.967 cpu : usr=90.79%, sys=8.70%, ctx=28, majf=0, minf=171 00:42:08.967 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:08.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.967 issued rwts: total=2056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:08.967 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:08.967 filename0: (groupid=0, jobs=1): err= 0: pid=2548647: Mon Oct 28 05:17:57 2024 00:42:08.967 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(253MiB/10046msec) 00:42:08.967 slat (nsec): min=7042, max=37521, avg=13461.74, stdev=2950.04 00:42:08.967 clat (usec): min=7919, max=55347, avg=14866.37, stdev=1704.70 00:42:08.967 lat (usec): min=7927, max=55361, avg=14879.83, stdev=1704.80 00:42:08.967 clat percentiles (usec): 00:42:08.967 | 1.00th=[10159], 5.00th=[12911], 10.00th=[13435], 20.00th=[13960], 00:42:08.967 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:42:08.967 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:42:08.967 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19006], 99.95th=[46400], 00:42:08.967 | 99.99th=[55313] 00:42:08.967 bw ( KiB/s): min=24320, max=27648, per=33.47%, avg=25856.00, stdev=1082.94, samples=20 00:42:08.967 iops : min= 190, max= 216, avg=202.00, stdev= 8.46, samples=20 00:42:08.967 lat (msec) : 10=0.94%, 20=98.96%, 50=0.05%, 100=0.05% 00:42:08.967 cpu : usr=90.33%, sys=9.18%, ctx=29, majf=0, minf=116 00:42:08.967 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:08.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.967 issued rwts: total=2022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:08.967 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:08.967 filename0: (groupid=0, jobs=1): err= 0: pid=2548648: Mon Oct 28 05:17:57 2024 00:42:08.967 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(248MiB/10046msec) 00:42:08.967 slat (nsec): min=7062, max=35710, avg=13290.03, stdev=3165.97 00:42:08.967 clat (usec): min=11015, max=57901, avg=15129.25, stdev=3186.92 00:42:08.967 lat (usec): min=11028, max=57914, avg=15142.54, stdev=3186.90 00:42:08.967 clat percentiles (usec): 00:42:08.967 | 1.00th=[12256], 5.00th=[12911], 10.00th=[13435], 
20.00th=[13829], 00:42:08.967 | 30.00th=[14222], 40.00th=[14615], 50.00th=[14877], 60.00th=[15270], 00:42:08.967 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16581], 95.00th=[16909], 00:42:08.967 | 99.00th=[17957], 99.50th=[51119], 99.90th=[57410], 99.95th=[57934], 00:42:08.967 | 99.99th=[57934] 00:42:08.967 bw ( KiB/s): min=23040, max=27392, per=32.89%, avg=25408.00, stdev=1200.39, samples=20 00:42:08.967 iops : min= 180, max= 214, avg=198.50, stdev= 9.38, samples=20 00:42:08.967 lat (msec) : 20=99.45%, 50=0.05%, 100=0.50% 00:42:08.967 cpu : usr=90.86%, sys=8.65%, ctx=21, majf=0, minf=110 00:42:08.967 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:08.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.967 issued rwts: total=1987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:08.967 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:08.967 00:42:08.967 Run status group 0 (all jobs): 00:42:08.967 READ: bw=75.5MiB/s (79.1MB/s), 24.7MiB/s-25.6MiB/s (25.9MB/s-26.8MB/s), io=758MiB (795MB), run=10046-10048msec 00:42:08.967 05:17:57 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:08.967 05:17:57 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:08.967 05:17:57 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:08.967 05:17:57 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:08.967 05:17:57 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:08.967 05:17:57 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:08.967 05:17:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:08.967 05:17:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:08.967 05:17:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:08.968 05:17:57 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:08.968 05:17:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:08.968 05:17:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:08.968 05:17:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:08.968 00:42:08.968 real 0m11.396s 00:42:08.968 user 0m28.615s 00:42:08.968 sys 0m2.948s 00:42:08.968 05:17:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:08.968 05:17:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:08.968 ************************************ 00:42:08.968 END TEST fio_dif_digest 00:42:08.968 ************************************ 00:42:08.968 05:17:57 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:08.968 05:17:57 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:08.968 05:17:57 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:08.968 05:17:57 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:08.968 05:17:57 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:08.968 05:17:57 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:08.968 05:17:57 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:08.968 05:17:57 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:08.968 rmmod nvme_tcp 00:42:08.968 rmmod nvme_fabrics 00:42:08.968 rmmod nvme_keyring 00:42:08.968 05:17:57 
nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:08.968 05:17:57 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:08.968 05:17:57 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:08.968 05:17:57 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 2542642 ']' 00:42:08.968 05:17:57 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 2542642 00:42:08.968 05:17:57 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2542642 ']' 00:42:08.968 05:17:57 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2542642 00:42:08.968 05:17:57 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:42:08.968 05:17:57 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:08.968 05:17:57 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2542642 00:42:08.968 05:17:57 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:08.968 05:17:57 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:08.968 05:17:57 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2542642' 00:42:08.968 killing process with pid 2542642 00:42:08.968 05:17:57 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2542642 00:42:08.968 05:17:57 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2542642 00:42:08.968 05:17:58 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:42:08.968 05:17:58 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:08.968 Waiting for block devices as requested 00:42:08.968 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:08.968 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:08.968 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:09.225 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:09.225 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:09.225 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:09.225 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:09.483 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:09.483 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:09.483 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:09.483 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:09.741 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:09.741 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:09.741 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:09.741 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:09.741 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:09.999 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:09.999 05:18:00 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:09.999 05:18:00 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:09.999 05:18:00 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:09.999 05:18:00 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:42:09.999 05:18:00 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:09.999 05:18:00 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:42:09.999 05:18:00 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:09.999 05:18:00 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:09.999 05:18:00 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:09.999 05:18:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:09.999 05:18:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:12.529 05:18:02 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
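The tail of the nvmf_dif log is the standard nvmftestfini/cleanup path; with the xtrace noise stripped it reduces to roughly the following (2542642 is this run's nvmf_tgt pid):

sync
modprobe -v -r nvme-tcp        # pulls out nvme_tcp, nvme_fabrics and nvme_keyring, as logged
modprobe -v -r nvme-fabrics
kill 2542642                   # killprocess: stop the nvmf_tgt reactor started for the test
# setup.sh reset rebinds the test devices to their kernel drivers (vfio-pci -> nvme / ioatdma)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK test rules
ip -4 addr flush cvl_0_1                               # clear the initiator-side test address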
00:42:12.529 00:42:12.529 real 1m8.087s 00:42:12.529 user 6m30.568s 00:42:12.529 sys 0m19.864s 00:42:12.529 05:18:02 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:12.529 05:18:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:12.529 ************************************ 00:42:12.529 END TEST nvmf_dif 00:42:12.529 ************************************ 00:42:12.529 05:18:02 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:12.529 05:18:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:12.529 05:18:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:12.529 05:18:02 -- common/autotest_common.sh@10 -- # set +x 00:42:12.529 ************************************ 00:42:12.529 START TEST nvmf_abort_qd_sizes 00:42:12.529 ************************************ 00:42:12.529 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:12.529 * Looking for test storage... 00:42:12.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:12.529 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:42:12.529 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1689 -- # lcov --version 00:42:12.529 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:42:12.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.530 --rc genhtml_branch_coverage=1 00:42:12.530 --rc genhtml_function_coverage=1 00:42:12.530 --rc genhtml_legend=1 00:42:12.530 --rc geninfo_all_blocks=1 00:42:12.530 --rc geninfo_unexecuted_blocks=1 00:42:12.530 00:42:12.530 ' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:42:12.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.530 --rc genhtml_branch_coverage=1 00:42:12.530 --rc genhtml_function_coverage=1 00:42:12.530 --rc genhtml_legend=1 00:42:12.530 --rc geninfo_all_blocks=1 00:42:12.530 --rc geninfo_unexecuted_blocks=1 00:42:12.530 00:42:12.530 ' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:42:12.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.530 --rc genhtml_branch_coverage=1 00:42:12.530 --rc genhtml_function_coverage=1 00:42:12.530 --rc genhtml_legend=1 00:42:12.530 --rc geninfo_all_blocks=1 00:42:12.530 --rc geninfo_unexecuted_blocks=1 00:42:12.530 00:42:12.530 ' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:42:12.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.530 --rc genhtml_branch_coverage=1 00:42:12.530 --rc genhtml_function_coverage=1 00:42:12.530 --rc genhtml_legend=1 00:42:12.530 --rc geninfo_all_blocks=1 00:42:12.530 --rc geninfo_unexecuted_blocks=1 00:42:12.530 00:42:12.530 ' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:12.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:12.530 05:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:14.429 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:14.429 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:14.429 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:14.429 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:14.429 05:18:04 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:14.429 05:18:04 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:14.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:14.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:42:14.687 00:42:14.687 --- 10.0.0.2 ping statistics --- 00:42:14.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:14.687 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:14.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:14.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:42:14.687 00:42:14.687 --- 10.0.0.1 ping statistics --- 00:42:14.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:14.687 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:42:14.687 05:18:05 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:16.063 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:16.063 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:16.063 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:16.063 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:16.063 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:16.063 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:16.063 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:16.063 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:16.063 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:16.063 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:16.063 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:16.063 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:16.063 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:16.063 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:16.063 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:16.063 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:17.001 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=2553537 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 2553537 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2553537 ']' 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:17.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:17.001 05:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:17.001 [2024-10-28 05:18:07.565046] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:42:17.001 [2024-10-28 05:18:07.565118] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:17.260 [2024-10-28 05:18:07.702272] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:17.260 [2024-10-28 05:18:07.738359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:17.260 [2024-10-28 05:18:07.789225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:17.260 [2024-10-28 05:18:07.789285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:17.260 [2024-10-28 05:18:07.789302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:17.260 [2024-10-28 05:18:07.789316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:17.260 [2024-10-28 05:18:07.789328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:17.260 [2024-10-28 05:18:07.791008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:17.260 [2024-10-28 05:18:07.791073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:17.260 [2024-10-28 05:18:07.791166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:17.260 [2024-10-28 05:18:07.791169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:18.199 05:18:08 
nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:18.199 05:18:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:18.199 ************************************ 00:42:18.199 START TEST spdk_target_abort 00:42:18.199 ************************************ 00:42:18.199 05:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:42:18.199 05:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:18.199 05:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:42:18.199 05:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:18.199 05:18:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:21.479 spdk_targetn1 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:21.480 [2024-10-28 05:18:11.462896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:21.480 [2024-10-28 05:18:11.511965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:21.480 05:18:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:24.759 Initializing NVMe Controllers 00:42:24.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:24.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:24.759 Initialization complete. Launching workers. 00:42:24.759 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10778, failed: 0 00:42:24.759 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1283, failed to submit 9495 00:42:24.759 success 760, unsuccessful 523, failed 0 00:42:24.759 05:18:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:24.759 05:18:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:28.036 Initializing NVMe Controllers 00:42:28.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:28.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:28.036 Initialization complete. Launching workers. 00:42:28.036 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8686, failed: 0 00:42:28.036 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1251, failed to submit 7435 00:42:28.036 success 338, unsuccessful 913, failed 0 00:42:28.036 05:18:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:28.036 05:18:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:31.315 Initializing NVMe Controllers 00:42:31.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:31.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:31.315 Initialization complete. Launching workers. 
00:42:31.315 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31321, failed: 0 00:42:31.315 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2582, failed to submit 28739 00:42:31.315 success 553, unsuccessful 2029, failed 0 00:42:31.315 05:18:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:31.315 05:18:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:31.315 05:18:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:31.315 05:18:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:31.315 05:18:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:31.315 05:18:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:31.315 05:18:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2553537 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2553537 ']' 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2553537 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2553537 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2553537' 00:42:32.687 killing process with pid 2553537 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2553537 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2553537 00:42:32.687 00:42:32.687 real 0m14.657s 00:42:32.687 user 0m57.678s 00:42:32.687 sys 0m2.913s 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:32.687 05:18:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:32.687 ************************************ 00:42:32.687 END TEST spdk_target_abort 00:42:32.687 ************************************ 00:42:32.945 05:18:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:32.945 05:18:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:32.945 05:18:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:32.945 05:18:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:32.945 ************************************ 00:42:32.945 START TEST kernel_target_abort 00:42:32.945 
************************************ 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:32.945 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:42:32.946 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:32.946 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:32.946 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:32.946 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:42:32.946 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:32.946 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:42:32.946 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:32.946 05:18:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:33.880 Waiting for block devices as requested 00:42:33.880 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:34.139 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:34.139 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:34.139 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:34.396 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:34.396 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:34.396 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:34.396 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:34.675 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:34.675 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:34.675 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:34.675 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:34.953 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:34.953 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:34.953 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:34.953 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:35.215 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:35.215 No valid GPT data, bailing 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:35.215 05:18:25 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:35.215 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:35.473 00:42:35.473 Discovery Log Number of Records 2, Generation counter 2 00:42:35.473 =====Discovery Log Entry 0====== 00:42:35.473 trtype: tcp 00:42:35.473 adrfam: ipv4 00:42:35.473 subtype: current discovery subsystem 00:42:35.473 treq: not specified, sq flow control disable supported 00:42:35.473 portid: 1 00:42:35.473 trsvcid: 4420 00:42:35.473 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:35.473 traddr: 10.0.0.1 00:42:35.473 eflags: none 00:42:35.473 sectype: none 00:42:35.473 =====Discovery Log Entry 1====== 00:42:35.473 trtype: tcp 00:42:35.473 adrfam: ipv4 00:42:35.473 subtype: nvme subsystem 00:42:35.473 treq: not specified, sq flow control disable supported 00:42:35.473 portid: 1 00:42:35.473 trsvcid: 4420 00:42:35.473 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:35.473 traddr: 10.0.0.1 00:42:35.473 eflags: none 00:42:35.473 sectype: none 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:35.473 05:18:25 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:35.473 05:18:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:38.752 Initializing NVMe Controllers 00:42:38.752 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:38.752 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:38.752 Initialization complete. Launching workers. 00:42:38.752 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39672, failed: 0 00:42:38.752 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39672, failed to submit 0 00:42:38.752 success 0, unsuccessful 39672, failed 0 00:42:38.752 05:18:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:38.752 05:18:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:42.030 Initializing NVMe Controllers 00:42:42.031 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:42.031 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:42.031 Initialization complete. Launching workers. 
00:42:42.031 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69362, failed: 0 00:42:42.031 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17498, failed to submit 51864 00:42:42.031 success 0, unsuccessful 17498, failed 0 00:42:42.031 05:18:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:42.031 05:18:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:45.314 Initializing NVMe Controllers 00:42:45.314 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:45.314 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:45.314 Initialization complete. Launching workers. 00:42:45.314 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71384, failed: 0 00:42:45.314 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17842, failed to submit 53542 00:42:45.314 success 0, unsuccessful 17842, failed 0 00:42:45.314 05:18:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:45.314 05:18:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:45.314 05:18:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:42:45.314 05:18:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:45.314 05:18:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:45.314 05:18:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:45.314 05:18:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:45.314 05:18:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:42:45.314 05:18:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:42:45.314 05:18:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:46.251 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:46.251 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:46.251 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:46.251 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:46.251 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:46.251 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:46.251 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:46.251 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:46.251 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:46.251 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:46.251 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:46.251 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:46.251 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:46.251 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:46.251 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:42:46.251 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:47.188 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:47.446 00:42:47.446 real 0m14.544s 00:42:47.446 user 0m5.729s 00:42:47.446 sys 0m3.282s 00:42:47.446 05:18:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:47.446 05:18:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:47.446 ************************************ 00:42:47.446 END TEST kernel_target_abort 00:42:47.446 ************************************ 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:47.446 rmmod nvme_tcp 00:42:47.446 rmmod nvme_fabrics 00:42:47.446 rmmod nvme_keyring 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 2553537 ']' 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 2553537 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2553537 ']' 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2553537 00:42:47.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2553537) - No such process 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2553537 is not found' 00:42:47.446 Process with pid 2553537 is not found 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:42:47.446 05:18:37 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:48.819 Waiting for block devices as requested 00:42:48.819 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:48.819 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:48.819 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:48.819 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:49.077 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:49.077 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:49.077 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:49.077 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:49.336 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:49.336 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:49.336 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:49.336 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:49.593 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:49.593 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:49.593 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:49.593 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:49.852 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:42:49.852 05:18:40 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:49.852 05:18:40 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:49.852 05:18:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:49.852 05:18:40 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:42:49.852 05:18:40 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:42:49.852 05:18:40 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:49.852 05:18:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:49.852 05:18:40 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:49.852 05:18:40 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:49.852 05:18:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:49.852 05:18:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:52.383 05:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:52.383 00:42:52.383 real 0m39.779s 00:42:52.383 user 1m5.975s 00:42:52.383 sys 0m9.750s 00:42:52.383 05:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:52.383 05:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 ************************************ 00:42:52.383 END TEST nvmf_abort_qd_sizes 00:42:52.383 ************************************ 00:42:52.383 05:18:42 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:52.383 05:18:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:52.383 05:18:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:52.383 05:18:42 -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 ************************************ 00:42:52.383 START TEST keyring_file 00:42:52.383 ************************************ 00:42:52.383 05:18:42 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:52.383 * Looking for test storage... 
00:42:52.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:52.383 05:18:42 keyring_file -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:42:52.383 05:18:42 keyring_file -- common/autotest_common.sh@1689 -- # lcov --version 00:42:52.383 05:18:42 keyring_file -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:42:52.383 05:18:42 keyring_file -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:52.383 05:18:42 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:52.383 05:18:42 keyring_file -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:52.383 05:18:42 keyring_file -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:42:52.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.383 --rc genhtml_branch_coverage=1 00:42:52.383 --rc genhtml_function_coverage=1 00:42:52.383 --rc genhtml_legend=1 00:42:52.383 --rc geninfo_all_blocks=1 00:42:52.383 --rc geninfo_unexecuted_blocks=1 00:42:52.383 00:42:52.383 ' 00:42:52.383 05:18:42 keyring_file -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:42:52.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.383 --rc genhtml_branch_coverage=1 00:42:52.383 --rc genhtml_function_coverage=1 00:42:52.383 --rc genhtml_legend=1 00:42:52.383 --rc geninfo_all_blocks=1 
00:42:52.383 --rc geninfo_unexecuted_blocks=1 00:42:52.383 00:42:52.383 ' 00:42:52.383 05:18:42 keyring_file -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:42:52.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.383 --rc genhtml_branch_coverage=1 00:42:52.384 --rc genhtml_function_coverage=1 00:42:52.384 --rc genhtml_legend=1 00:42:52.384 --rc geninfo_all_blocks=1 00:42:52.384 --rc geninfo_unexecuted_blocks=1 00:42:52.384 00:42:52.384 ' 00:42:52.384 05:18:42 keyring_file -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:42:52.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:52.384 --rc genhtml_branch_coverage=1 00:42:52.384 --rc genhtml_function_coverage=1 00:42:52.384 --rc genhtml_legend=1 00:42:52.384 --rc geninfo_all_blocks=1 00:42:52.384 --rc geninfo_unexecuted_blocks=1 00:42:52.384 00:42:52.384 ' 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:52.384 05:18:42 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:52.384 05:18:42 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:52.384 05:18:42 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:52.384 05:18:42 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:52.384 05:18:42 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.384 05:18:42 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.384 05:18:42 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.384 05:18:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:52.384 05:18:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:52.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
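The prep_key helper entered here builds an NVMe/TCP TLS PSK in interchange format and writes it to a mktemp path before registering it with the keyring (the format_interchange_psk / inline python steps follow just below). A minimal Python sketch of that formatting, under the assumption that the configured key string's bytes are base64-encoded together with a little-endian CRC32 and wrapped as NVMeTLSkey-1:<digest>:<base64>: — an illustration only, not the exact helper from nvmf/common.sh:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    """Illustrative sketch of the NVMe TLS PSK interchange format:
    NVMeTLSkey-1:<2-hex-digit digest id>:<base64(key bytes + CRC32)>:
    Assumption: the key string's ASCII bytes are used as-is, as in this test."""
    key_bytes = key.encode("ascii")
    crc = zlib.crc32(key_bytes).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key_bytes + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

if __name__ == "__main__":
    # Same key0 value and digest 0 as the prep_key call above; the result is
    # what gets chmod 0600'd and passed to keyring_file_add_key.
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```

The resulting file is then chmod 0600'd and handed to keyring_file_add_key, which is why the later chmod 0660 negative test trips the key-file permission check.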
00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uI4lFnp6Ff 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uI4lFnp6Ff 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uI4lFnp6Ff 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.uI4lFnp6Ff 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5RKItE6NFQ 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:52.384 05:18:42 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5RKItE6NFQ 00:42:52.384 05:18:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5RKItE6NFQ 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5RKItE6NFQ 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@30 -- # tgtpid=2559846 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:52.384 05:18:42 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2559846 00:42:52.384 05:18:42 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2559846 ']' 00:42:52.384 05:18:42 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:52.384 05:18:42 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:52.384 05:18:42 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:52.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:52.384 05:18:42 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:52.384 05:18:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:52.384 [2024-10-28 05:18:42.745579] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:42:52.384 [2024-10-28 05:18:42.745706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559846 ] 00:42:52.384 [2024-10-28 05:18:42.876961] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:52.384 [2024-10-28 05:18:42.913189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:52.384 [2024-10-28 05:18:42.963039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:53.318 05:18:43 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:53.318 [2024-10-28 05:18:43.760175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:53.318 null0 00:42:53.318 [2024-10-28 05:18:43.792145] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:53.318 [2024-10-28 05:18:43.792655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:53.318 05:18:43 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:53.318 [2024-10-28 05:18:43.820126] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:53.318 request: 00:42:53.318 { 00:42:53.318 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:53.318 "secure_channel": false, 00:42:53.318 "listen_address": { 00:42:53.318 "trtype": "tcp", 00:42:53.318 "traddr": "127.0.0.1", 00:42:53.318 "trsvcid": "4420" 00:42:53.318 }, 
00:42:53.318 "method": "nvmf_subsystem_add_listener", 00:42:53.318 "req_id": 1 00:42:53.318 } 00:42:53.318 Got JSON-RPC error response 00:42:53.318 response: 00:42:53.318 { 00:42:53.318 "code": -32602, 00:42:53.318 "message": "Invalid parameters" 00:42:53.318 } 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:53.318 05:18:43 keyring_file -- keyring/file.sh@47 -- # bperfpid=2559981 00:42:53.318 05:18:43 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:53.318 05:18:43 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2559981 /var/tmp/bperf.sock 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2559981 ']' 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:53.318 05:18:43 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:53.319 05:18:43 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:53.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:53.319 05:18:43 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:53.319 05:18:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:53.319 [2024-10-28 05:18:43.870467] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:42:53.319 [2024-10-28 05:18:43.870544] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559981 ] 00:42:53.577 [2024-10-28 05:18:44.001375] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:42:53.577 [2024-10-28 05:18:44.040843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:53.577 [2024-10-28 05:18:44.090174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:54.513 05:18:44 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:54.513 05:18:44 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:54.513 05:18:44 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uI4lFnp6Ff 00:42:54.513 05:18:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uI4lFnp6Ff 00:42:54.770 05:18:45 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5RKItE6NFQ 00:42:54.770 05:18:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5RKItE6NFQ 00:42:55.028 05:18:45 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:55.028 05:18:45 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:55.028 05:18:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:55.028 05:18:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:55.028 05:18:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:55.286 05:18:45 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.uI4lFnp6Ff == \/\t\m\p\/\t\m\p\.\u\I\4\l\F\n\p\6\F\f ]] 00:42:55.286 05:18:45 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:55.286 05:18:45 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:55.286 05:18:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:55.286 05:18:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:55.286 05:18:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:55.543 05:18:45 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.5RKItE6NFQ == \/\t\m\p\/\t\m\p\.\5\R\K\I\t\E\6\N\F\Q ]] 00:42:55.543 05:18:45 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:55.543 05:18:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:55.543 05:18:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:55.544 05:18:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:55.544 05:18:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:55.544 05:18:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:55.801 05:18:46 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:55.801 05:18:46 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:55.801 05:18:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:55.801 05:18:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:55.801 05:18:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:55.801 05:18:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:55.801 05:18:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:56.059 
05:18:46 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:56.059 05:18:46 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:56.059 05:18:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:56.316 [2024-10-28 05:18:46.789382] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:56.316 nvme0n1 00:42:56.316 05:18:46 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:56.316 05:18:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:56.316 05:18:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:56.316 05:18:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:56.316 05:18:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:56.316 05:18:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:56.881 05:18:47 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:56.882 05:18:47 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:56.882 05:18:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:56.882 05:18:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:56.882 05:18:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:56.882 05:18:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:56.882 05:18:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:56.882 05:18:47 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:56.882 05:18:47 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:57.139 Running I/O for 1 seconds... 
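The get_key and get_refcnt helpers used throughout this test wrap keyring_get_keys with a jq filter (.[] | select(.name == "keyX")) to pull a single key's path and refcnt out of the JSON the bperf target returns; the (( 2 == 2 )) and (( 1 == 1 )) checks compare those refcounts before and after the controller attach. A hedged Python equivalent of that filtering, with a made-up sample payload standing in for the real RPC output:

```python
import json

# Hypothetical sample of keyring_get_keys output, shaped like the entries the
# jq filters select on (name/path/refcnt/removed are the fields used here).
sample = json.loads("""
[
  {"name": "key0", "path": "/tmp/tmp.uI4lFnp6Ff", "refcnt": 2, "removed": false},
  {"name": "key1", "path": "/tmp/tmp.5RKItE6NFQ", "refcnt": 1, "removed": false}
]
""")

def get_key(keys, name):
    """Python analogue of: jq '.[] | select(.name == "<name>")'"""
    return next((k for k in keys if k["name"] == name), None)

def get_refcnt(keys, name):
    """Python analogue of: get_key <name> | jq -r .refcnt"""
    key = get_key(keys, name)
    return key["refcnt"] if key else None

if __name__ == "__main__":
    assert get_refcnt(sample, "key0") == 2  # key0 held by the keyring and the nvme0 controller
    assert get_refcnt(sample, "key1") == 1  # key1 registered but not in use
    print(get_key(sample, "key0")["path"])
```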
00:42:58.071 5721.00 IOPS, 22.35 MiB/s 00:42:58.071 Latency(us) 00:42:58.071 [2024-10-28T04:18:48.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:58.071 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:58.071 nvme0n1 : 1.02 5730.78 22.39 0.00 0.00 22109.77 5206.87 27056.26 00:42:58.071 [2024-10-28T04:18:48.667Z] =================================================================================================================== 00:42:58.071 [2024-10-28T04:18:48.667Z] Total : 5730.78 22.39 0.00 0.00 22109.77 5206.87 27056.26 00:42:58.071 { 00:42:58.071 "results": [ 00:42:58.071 { 00:42:58.071 "job": "nvme0n1", 00:42:58.071 "core_mask": "0x2", 00:42:58.071 "workload": "randrw", 00:42:58.071 "percentage": 50, 00:42:58.071 "status": "finished", 00:42:58.071 "queue_depth": 128, 00:42:58.071 "io_size": 4096, 00:42:58.071 "runtime": 1.020803, 00:42:58.071 "iops": 5730.782531007452, 00:42:58.071 "mibps": 22.38586926174786, 00:42:58.071 "io_failed": 0, 00:42:58.071 "io_timeout": 0, 00:42:58.071 "avg_latency_us": 22109.7694540989, 00:42:58.071 "min_latency_us": 5206.8698719138665, 00:42:58.071 "max_latency_us": 27056.258399851493 00:42:58.071 } 00:42:58.071 ], 00:42:58.071 "core_count": 1 00:42:58.071 } 00:42:58.071 05:18:48 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:58.071 05:18:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:58.328 05:18:48 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:58.328 05:18:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:58.328 05:18:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:58.328 05:18:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:58.328 05:18:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:58.328 05:18:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:58.586 05:18:49 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:58.586 05:18:49 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:58.586 05:18:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:58.586 05:18:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:58.586 05:18:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:58.586 05:18:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:58.586 05:18:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:59.156 05:18:49 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:59.156 05:18:49 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:59.156 05:18:49 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:59.156 05:18:49 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:59.156 05:18:49 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 
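The bdevperf summary above reports 5730.78 IOPS with the 4 KiB I/O size configured via -o 4k; the 22.39 MiB/s figure follows directly from that. A quick arithmetic check:

```python
# Values taken from the JSON results block above.
iops = 5730.782531007452
io_size_bytes = 4096            # bdevperf was started with -o 4k

mib_per_s = iops * io_size_bytes / (1024 * 1024)
print(round(mib_per_s, 2))      # 22.39, matching the reported "mibps"
```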
00:42:59.156 05:18:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:59.156 05:18:49 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:59.156 05:18:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:59.156 05:18:49 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:59.156 05:18:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:59.156 [2024-10-28 05:18:49.714518] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:59.156 [2024-10-28 05:18:49.715280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222fbb0 (107): Transport endpoint is not connected 00:42:59.156 [2024-10-28 05:18:49.716269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222fbb0 (9): Bad file descriptor 00:42:59.156 [2024-10-28 05:18:49.717265] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:59.156 [2024-10-28 05:18:49.717288] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:59.156 [2024-10-28 05:18:49.717303] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:59.156 [2024-10-28 05:18:49.717320] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:42:59.156 request: 00:42:59.156 { 00:42:59.156 "name": "nvme0", 00:42:59.156 "trtype": "tcp", 00:42:59.156 "traddr": "127.0.0.1", 00:42:59.156 "adrfam": "ipv4", 00:42:59.156 "trsvcid": "4420", 00:42:59.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:59.156 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:59.156 "prchk_reftag": false, 00:42:59.156 "prchk_guard": false, 00:42:59.156 "hdgst": false, 00:42:59.156 "ddgst": false, 00:42:59.156 "psk": "key1", 00:42:59.156 "allow_unrecognized_csi": false, 00:42:59.156 "method": "bdev_nvme_attach_controller", 00:42:59.157 "req_id": 1 00:42:59.157 } 00:42:59.157 Got JSON-RPC error response 00:42:59.157 response: 00:42:59.157 { 00:42:59.157 "code": -5, 00:42:59.157 "message": "Input/output error" 00:42:59.157 } 00:42:59.157 05:18:49 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:59.157 05:18:49 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:59.157 05:18:49 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:59.157 05:18:49 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:59.157 05:18:49 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:59.157 05:18:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:59.157 05:18:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:59.157 05:18:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:59.157 05:18:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:59.157 05:18:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:59.722 05:18:50 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:59.722 05:18:50 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:59.722 05:18:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:59.722 05:18:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:59.722 05:18:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:59.722 05:18:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:59.722 05:18:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:59.722 05:18:50 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:59.722 05:18:50 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:59.722 05:18:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:00.288 05:18:50 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:43:00.288 05:18:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:00.546 05:18:50 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:43:00.546 05:18:50 keyring_file -- keyring/file.sh@78 -- # jq length 00:43:00.546 05:18:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.804 05:18:51 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:43:00.804 05:18:51 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.uI4lFnp6Ff 00:43:00.804 05:18:51 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.uI4lFnp6Ff 00:43:00.804 05:18:51 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:00.804 05:18:51 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.uI4lFnp6Ff 00:43:00.804 05:18:51 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:00.804 05:18:51 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:00.804 05:18:51 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:00.804 05:18:51 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:00.804 05:18:51 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uI4lFnp6Ff 00:43:00.804 05:18:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uI4lFnp6Ff 00:43:01.062 [2024-10-28 05:18:51.419343] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uI4lFnp6Ff': 0100660 00:43:01.062 [2024-10-28 05:18:51.419382] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:01.062 request: 00:43:01.062 { 00:43:01.062 "name": "key0", 00:43:01.062 "path": "/tmp/tmp.uI4lFnp6Ff", 00:43:01.062 "method": "keyring_file_add_key", 00:43:01.062 "req_id": 1 00:43:01.062 } 00:43:01.062 Got JSON-RPC error response 00:43:01.062 response: 00:43:01.062 { 00:43:01.062 "code": -1, 00:43:01.062 "message": "Operation not permitted" 00:43:01.062 } 00:43:01.062 05:18:51 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:43:01.062 05:18:51 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:01.062 05:18:51 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:01.062 05:18:51 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:01.062 05:18:51 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.uI4lFnp6Ff 00:43:01.062 05:18:51 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uI4lFnp6Ff 00:43:01.062 05:18:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uI4lFnp6Ff 00:43:01.320 05:18:51 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.uI4lFnp6Ff 00:43:01.320 05:18:51 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:43:01.320 05:18:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:01.320 05:18:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:01.320 05:18:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:01.320 05:18:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:01.321 05:18:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:01.580 05:18:52 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:43:01.580 05:18:52 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:01.580 05:18:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:01.580 05:18:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:01.580 05:18:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:01.580 05:18:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:01.580 05:18:52 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:01.580 05:18:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:01.580 05:18:52 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:01.580 05:18:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:01.839 [2024-10-28 05:18:52.275574] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.uI4lFnp6Ff': No such file or directory 00:43:01.839 [2024-10-28 05:18:52.275632] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:01.839 [2024-10-28 05:18:52.275665] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:01.839 [2024-10-28 05:18:52.275693] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:43:01.839 [2024-10-28 05:18:52.275708] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:01.839 [2024-10-28 05:18:52.275720] bdev_nvme.c:6576:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:01.839 request: 00:43:01.839 { 00:43:01.839 "name": "nvme0", 00:43:01.839 "trtype": "tcp", 00:43:01.839 "traddr": "127.0.0.1", 00:43:01.839 "adrfam": "ipv4", 00:43:01.839 "trsvcid": "4420", 00:43:01.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:01.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:01.839 "prchk_reftag": false, 00:43:01.839 "prchk_guard": false, 00:43:01.839 "hdgst": false, 00:43:01.839 "ddgst": false, 00:43:01.839 "psk": "key0", 00:43:01.839 "allow_unrecognized_csi": false, 00:43:01.839 "method": "bdev_nvme_attach_controller", 00:43:01.839 "req_id": 1 00:43:01.839 } 00:43:01.839 Got JSON-RPC error response 00:43:01.839 response: 00:43:01.839 { 00:43:01.839 "code": -19, 00:43:01.839 "message": "No such device" 00:43:01.839 } 00:43:01.839 05:18:52 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:43:01.839 05:18:52 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:01.839 05:18:52 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:01.839 05:18:52 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:01.839 05:18:52 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:43:01.839 05:18:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:02.098 05:18:52 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:02.098 05:18:52 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:43:02.098 05:18:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:02.098 05:18:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:02.098 05:18:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:02.098 05:18:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:02.098 05:18:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wXjKsIvcaj 00:43:02.098 05:18:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:02.098 05:18:52 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:02.098 05:18:52 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:43:02.098 05:18:52 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:43:02.098 05:18:52 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:43:02.098 05:18:52 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:43:02.098 05:18:52 keyring_file -- nvmf/common.sh@731 -- # python - 00:43:02.098 05:18:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wXjKsIvcaj 00:43:02.098 05:18:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wXjKsIvcaj 00:43:02.098 05:18:52 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.wXjKsIvcaj 00:43:02.098 05:18:52 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wXjKsIvcaj 00:43:02.098 05:18:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wXjKsIvcaj 00:43:02.357 05:18:52 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:02.357 05:18:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:02.617 nvme0n1 00:43:02.876 05:18:53 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:43:02.876 05:18:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:02.876 05:18:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:02.876 05:18:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:02.876 05:18:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:02.876 05:18:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:03.134 05:18:53 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:43:03.134 05:18:53 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:43:03.134 05:18:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:03.395 05:18:53 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:43:03.395 05:18:53 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:43:03.395 05:18:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:03.395 05:18:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:43:03.395 05:18:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:03.653 05:18:54 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:43:03.653 05:18:54 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:43:03.653 05:18:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:03.653 05:18:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:03.653 05:18:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:03.653 05:18:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:03.653 05:18:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:03.910 05:18:54 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:43:03.910 05:18:54 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:03.910 05:18:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:04.168 05:18:54 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:43:04.168 05:18:54 keyring_file -- keyring/file.sh@105 -- # jq length 00:43:04.168 05:18:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:04.426 05:18:54 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:04.426 05:18:54 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wXjKsIvcaj 00:43:04.426 05:18:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wXjKsIvcaj 00:43:04.683 05:18:55 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5RKItE6NFQ 00:43:04.683 05:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5RKItE6NFQ 00:43:04.942 05:18:55 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:04.942 05:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:05.200 nvme0n1 00:43:05.458 05:18:55 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:05.458 05:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:05.716 05:18:56 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:05.716 "subsystems": [ 00:43:05.716 { 00:43:05.716 "subsystem": "keyring", 00:43:05.716 "config": [ 00:43:05.716 { 00:43:05.716 "method": "keyring_file_add_key", 00:43:05.716 "params": { 00:43:05.716 "name": "key0", 00:43:05.716 "path": "/tmp/tmp.wXjKsIvcaj" 00:43:05.716 } 00:43:05.716 }, 00:43:05.716 { 00:43:05.716 "method": "keyring_file_add_key", 00:43:05.716 "params": { 00:43:05.716 "name": "key1", 00:43:05.716 "path": "/tmp/tmp.5RKItE6NFQ" 00:43:05.716 } 00:43:05.716 } 00:43:05.716 ] 
00:43:05.716 }, 00:43:05.716 { 00:43:05.716 "subsystem": "iobuf", 00:43:05.716 "config": [ 00:43:05.716 { 00:43:05.716 "method": "iobuf_set_options", 00:43:05.716 "params": { 00:43:05.716 "small_pool_count": 8192, 00:43:05.716 "large_pool_count": 1024, 00:43:05.716 "small_bufsize": 8192, 00:43:05.716 "large_bufsize": 135168, 00:43:05.716 "enable_numa": false 00:43:05.716 } 00:43:05.716 } 00:43:05.716 ] 00:43:05.716 }, 00:43:05.716 { 00:43:05.716 "subsystem": "sock", 00:43:05.716 "config": [ 00:43:05.716 { 00:43:05.716 "method": "sock_set_default_impl", 00:43:05.716 "params": { 00:43:05.716 "impl_name": "posix" 00:43:05.716 } 00:43:05.716 }, 00:43:05.716 { 00:43:05.716 "method": "sock_impl_set_options", 00:43:05.716 "params": { 00:43:05.716 "impl_name": "ssl", 00:43:05.716 "recv_buf_size": 4096, 00:43:05.716 "send_buf_size": 4096, 00:43:05.716 "enable_recv_pipe": true, 00:43:05.716 "enable_quickack": false, 00:43:05.716 "enable_placement_id": 0, 00:43:05.716 "enable_zerocopy_send_server": true, 00:43:05.716 "enable_zerocopy_send_client": false, 00:43:05.716 "zerocopy_threshold": 0, 00:43:05.716 "tls_version": 0, 00:43:05.716 "enable_ktls": false 00:43:05.716 } 00:43:05.716 }, 00:43:05.716 { 00:43:05.716 "method": "sock_impl_set_options", 00:43:05.716 "params": { 00:43:05.716 "impl_name": "posix", 00:43:05.716 "recv_buf_size": 2097152, 00:43:05.716 "send_buf_size": 2097152, 00:43:05.716 "enable_recv_pipe": true, 00:43:05.716 "enable_quickack": false, 00:43:05.716 "enable_placement_id": 0, 00:43:05.716 "enable_zerocopy_send_server": true, 00:43:05.716 "enable_zerocopy_send_client": false, 00:43:05.716 "zerocopy_threshold": 0, 00:43:05.716 "tls_version": 0, 00:43:05.716 "enable_ktls": false 00:43:05.716 } 00:43:05.716 } 00:43:05.716 ] 00:43:05.716 }, 00:43:05.716 { 00:43:05.716 "subsystem": "vmd", 00:43:05.716 "config": [] 00:43:05.716 }, 00:43:05.716 { 00:43:05.716 "subsystem": "accel", 00:43:05.716 "config": [ 00:43:05.716 { 00:43:05.716 "method": "accel_set_options", 00:43:05.716 "params": { 00:43:05.716 "small_cache_size": 128, 00:43:05.716 "large_cache_size": 16, 00:43:05.716 "task_count": 2048, 00:43:05.716 "sequence_count": 2048, 00:43:05.717 "buf_count": 2048 00:43:05.717 } 00:43:05.717 } 00:43:05.717 ] 00:43:05.717 }, 00:43:05.717 { 00:43:05.717 "subsystem": "bdev", 00:43:05.717 "config": [ 00:43:05.717 { 00:43:05.717 "method": "bdev_set_options", 00:43:05.717 "params": { 00:43:05.717 "bdev_io_pool_size": 65535, 00:43:05.717 "bdev_io_cache_size": 256, 00:43:05.717 "bdev_auto_examine": true, 00:43:05.717 "iobuf_small_cache_size": 128, 00:43:05.717 "iobuf_large_cache_size": 16 00:43:05.717 } 00:43:05.717 }, 00:43:05.717 { 00:43:05.717 "method": "bdev_raid_set_options", 00:43:05.717 "params": { 00:43:05.717 "process_window_size_kb": 1024, 00:43:05.717 "process_max_bandwidth_mb_sec": 0 00:43:05.717 } 00:43:05.717 }, 00:43:05.717 { 00:43:05.717 "method": "bdev_iscsi_set_options", 00:43:05.717 "params": { 00:43:05.717 "timeout_sec": 30 00:43:05.717 } 00:43:05.717 }, 00:43:05.717 { 00:43:05.717 "method": "bdev_nvme_set_options", 00:43:05.717 "params": { 00:43:05.717 "action_on_timeout": "none", 00:43:05.717 "timeout_us": 0, 00:43:05.717 "timeout_admin_us": 0, 00:43:05.717 "keep_alive_timeout_ms": 10000, 00:43:05.717 "arbitration_burst": 0, 00:43:05.717 "low_priority_weight": 0, 00:43:05.717 "medium_priority_weight": 0, 00:43:05.717 "high_priority_weight": 0, 00:43:05.717 "nvme_adminq_poll_period_us": 10000, 00:43:05.717 "nvme_ioq_poll_period_us": 0, 00:43:05.717 "io_queue_requests": 512, 
00:43:05.717 "delay_cmd_submit": true, 00:43:05.717 "transport_retry_count": 4, 00:43:05.717 "bdev_retry_count": 3, 00:43:05.717 "transport_ack_timeout": 0, 00:43:05.717 "ctrlr_loss_timeout_sec": 0, 00:43:05.717 "reconnect_delay_sec": 0, 00:43:05.717 "fast_io_fail_timeout_sec": 0, 00:43:05.717 "disable_auto_failback": false, 00:43:05.717 "generate_uuids": false, 00:43:05.717 "transport_tos": 0, 00:43:05.717 "nvme_error_stat": false, 00:43:05.717 "rdma_srq_size": 0, 00:43:05.717 "io_path_stat": false, 00:43:05.717 "allow_accel_sequence": false, 00:43:05.717 "rdma_max_cq_size": 0, 00:43:05.717 "rdma_cm_event_timeout_ms": 0, 00:43:05.717 "dhchap_digests": [ 00:43:05.717 "sha256", 00:43:05.717 "sha384", 00:43:05.717 "sha512" 00:43:05.717 ], 00:43:05.717 "dhchap_dhgroups": [ 00:43:05.717 "null", 00:43:05.717 "ffdhe2048", 00:43:05.717 "ffdhe3072", 00:43:05.717 "ffdhe4096", 00:43:05.717 "ffdhe6144", 00:43:05.717 "ffdhe8192" 00:43:05.717 ] 00:43:05.717 } 00:43:05.717 }, 00:43:05.717 { 00:43:05.717 "method": "bdev_nvme_attach_controller", 00:43:05.717 "params": { 00:43:05.717 "name": "nvme0", 00:43:05.717 "trtype": "TCP", 00:43:05.717 "adrfam": "IPv4", 00:43:05.717 "traddr": "127.0.0.1", 00:43:05.717 "trsvcid": "4420", 00:43:05.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:05.717 "prchk_reftag": false, 00:43:05.717 "prchk_guard": false, 00:43:05.717 "ctrlr_loss_timeout_sec": 0, 00:43:05.717 "reconnect_delay_sec": 0, 00:43:05.717 "fast_io_fail_timeout_sec": 0, 00:43:05.717 "psk": "key0", 00:43:05.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:05.717 "hdgst": false, 00:43:05.717 "ddgst": false, 00:43:05.717 "multipath": "multipath" 00:43:05.717 } 00:43:05.717 }, 00:43:05.717 { 00:43:05.717 "method": "bdev_nvme_set_hotplug", 00:43:05.717 "params": { 00:43:05.717 "period_us": 100000, 00:43:05.717 "enable": false 00:43:05.717 } 00:43:05.717 }, 00:43:05.717 { 00:43:05.717 "method": "bdev_wait_for_examine" 00:43:05.717 } 00:43:05.717 ] 00:43:05.717 }, 00:43:05.717 { 00:43:05.717 "subsystem": "nbd", 00:43:05.717 "config": [] 00:43:05.717 } 00:43:05.717 ] 00:43:05.717 }' 00:43:05.717 05:18:56 keyring_file -- keyring/file.sh@115 -- # killprocess 2559981 00:43:05.717 05:18:56 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2559981 ']' 00:43:05.717 05:18:56 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2559981 00:43:05.717 05:18:56 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:05.717 05:18:56 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:05.717 05:18:56 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2559981 00:43:05.717 05:18:56 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:05.717 05:18:56 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:05.717 05:18:56 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2559981' 00:43:05.717 killing process with pid 2559981 00:43:05.717 05:18:56 keyring_file -- common/autotest_common.sh@969 -- # kill 2559981 00:43:05.717 Received shutdown signal, test time was about 1.000000 seconds 00:43:05.717 00:43:05.717 Latency(us) 00:43:05.717 [2024-10-28T04:18:56.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:05.717 [2024-10-28T04:18:56.313Z] =================================================================================================================== 00:43:05.717 [2024-10-28T04:18:56.313Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:43:05.717 05:18:56 keyring_file -- common/autotest_common.sh@974 -- # wait 2559981 00:43:05.975 05:18:56 keyring_file -- keyring/file.sh@118 -- # bperfpid=2561537 00:43:05.975 05:18:56 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2561537 /var/tmp/bperf.sock 00:43:05.975 05:18:56 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2561537 ']' 00:43:05.975 05:18:56 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:05.975 05:18:56 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:05.975 05:18:56 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:05.975 05:18:56 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:05.975 "subsystems": [ 00:43:05.975 { 00:43:05.975 "subsystem": "keyring", 00:43:05.975 "config": [ 00:43:05.975 { 00:43:05.975 "method": "keyring_file_add_key", 00:43:05.975 "params": { 00:43:05.975 "name": "key0", 00:43:05.975 "path": "/tmp/tmp.wXjKsIvcaj" 00:43:05.975 } 00:43:05.975 }, 00:43:05.975 { 00:43:05.975 "method": "keyring_file_add_key", 00:43:05.975 "params": { 00:43:05.975 "name": "key1", 00:43:05.975 "path": "/tmp/tmp.5RKItE6NFQ" 00:43:05.975 } 00:43:05.975 } 00:43:05.975 ] 00:43:05.975 }, 00:43:05.976 { 00:43:05.976 "subsystem": "iobuf", 00:43:05.976 "config": [ 00:43:05.976 { 00:43:05.976 "method": "iobuf_set_options", 00:43:05.976 "params": { 00:43:05.976 "small_pool_count": 8192, 00:43:05.976 "large_pool_count": 1024, 00:43:05.976 "small_bufsize": 8192, 00:43:05.976 "large_bufsize": 135168, 00:43:05.976 "enable_numa": false 00:43:05.976 } 00:43:05.976 } 00:43:05.976 ] 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "subsystem": "sock", 00:43:05.976 "config": [ 00:43:05.976 { 00:43:05.976 "method": "sock_set_default_impl", 00:43:05.976 "params": { 00:43:05.976 "impl_name": "posix" 00:43:05.976 } 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "method": "sock_impl_set_options", 00:43:05.976 "params": { 00:43:05.976 "impl_name": "ssl", 00:43:05.976 "recv_buf_size": 4096, 00:43:05.976 "send_buf_size": 4096, 00:43:05.976 "enable_recv_pipe": true, 00:43:05.976 "enable_quickack": false, 00:43:05.976 "enable_placement_id": 0, 00:43:05.976 "enable_zerocopy_send_server": true, 00:43:05.976 "enable_zerocopy_send_client": false, 00:43:05.976 "zerocopy_threshold": 0, 00:43:05.976 "tls_version": 0, 00:43:05.976 "enable_ktls": false 00:43:05.976 } 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "method": "sock_impl_set_options", 00:43:05.976 "params": { 00:43:05.976 "impl_name": "posix", 00:43:05.976 "recv_buf_size": 2097152, 00:43:05.976 "send_buf_size": 2097152, 00:43:05.976 "enable_recv_pipe": true, 00:43:05.976 "enable_quickack": false, 00:43:05.976 "enable_placement_id": 0, 00:43:05.976 "enable_zerocopy_send_server": true, 00:43:05.976 "enable_zerocopy_send_client": false, 00:43:05.976 "zerocopy_threshold": 0, 00:43:05.976 "tls_version": 0, 00:43:05.976 "enable_ktls": false 00:43:05.976 } 00:43:05.976 } 00:43:05.976 ] 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "subsystem": "vmd", 00:43:05.976 "config": [] 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "subsystem": "accel", 00:43:05.976 "config": [ 00:43:05.976 { 00:43:05.976 "method": "accel_set_options", 00:43:05.976 "params": { 00:43:05.976 "small_cache_size": 128, 00:43:05.976 "large_cache_size": 16, 00:43:05.976 "task_count": 2048, 00:43:05.976 "sequence_count": 2048, 00:43:05.976 "buf_count": 2048 
00:43:05.976 } 00:43:05.976 } 00:43:05.976 ] 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "subsystem": "bdev", 00:43:05.976 "config": [ 00:43:05.976 { 00:43:05.976 "method": "bdev_set_options", 00:43:05.976 "params": { 00:43:05.976 "bdev_io_pool_size": 65535, 00:43:05.976 "bdev_io_cache_size": 256, 00:43:05.976 "bdev_auto_examine": true, 00:43:05.976 "iobuf_small_cache_size": 128, 00:43:05.976 "iobuf_large_cache_size": 16 00:43:05.976 } 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "method": "bdev_raid_set_options", 00:43:05.976 "params": { 00:43:05.976 "process_window_size_kb": 1024, 00:43:05.976 "process_max_bandwidth_mb_sec": 0 00:43:05.976 } 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "method": "bdev_iscsi_set_options", 00:43:05.976 "params": { 00:43:05.976 "timeout_sec": 30 00:43:05.976 } 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "method": "bdev_nvme_set_options", 00:43:05.976 "params": { 00:43:05.976 "action_on_timeout": "none", 00:43:05.976 "timeout_us": 0, 00:43:05.976 "timeout_admin_us": 0, 00:43:05.976 "keep_alive_timeout_ms": 10000, 00:43:05.976 "arbitration_burst": 0, 00:43:05.976 "low_priority_weight": 0, 00:43:05.976 "medium_priority_weight": 0, 00:43:05.976 "high_priority_weight": 0, 00:43:05.976 "nvme_adminq_poll_period_us": 10000, 00:43:05.976 "nvme_ioq_poll_period_us": 0, 00:43:05.976 "io_queue_requests": 512, 00:43:05.976 "delay_cmd_submit": true, 00:43:05.976 "transport_retry_count": 4, 00:43:05.976 "bdev_retry_count": 3, 00:43:05.976 "transport_ack_timeout": 0, 00:43:05.976 "ctrlr_loss_timeout_sec": 0, 00:43:05.976 "reconnect_delay_sec": 0, 00:43:05.976 "fast_io_fail_timeout_sec": 0, 00:43:05.976 "disable_auto_failback": false, 00:43:05.976 "generate_uuids": false, 00:43:05.976 "transport_tos": 0, 00:43:05.976 "nvme_error_stat": false, 00:43:05.976 "rdma_srq_size": 0, 00:43:05.976 "io_path_stat": false, 00:43:05.976 "allow_accel_sequence": false, 00:43:05.976 "rdma_max_cq_size": 0, 00:43:05.976 "rdma_cm_event_timeout_ms": 0, 00:43:05.976 "dhchap_digests": [ 00:43:05.976 "sha256", 00:43:05.976 "sha384", 00:43:05.976 "sha512" 00:43:05.976 ], 00:43:05.976 "dhchap_dhgroups": [ 00:43:05.976 "null", 00:43:05.976 "ffdhe2048", 00:43:05.976 "ffdhe3072", 00:43:05.976 "ffdhe4096", 00:43:05.976 "ffdhe6144", 00:43:05.976 "ffdhe8192" 00:43:05.976 ] 00:43:05.976 } 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "method": "bdev_nvme_attach_controller", 00:43:05.976 "params": { 00:43:05.976 "name": "nvme0", 00:43:05.976 "trtype": "TCP", 00:43:05.976 "adrfam": "IPv4", 00:43:05.976 "traddr": "127.0.0.1", 00:43:05.976 "trsvcid": "4420", 00:43:05.976 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:05.976 "prchk_reftag": false, 00:43:05.976 "prchk_guard": false, 00:43:05.976 "ctrlr_loss_timeout_sec": 0, 00:43:05.976 "reconnect_delay_sec": 0, 00:43:05.976 "fast_io_fail_timeout_sec": 0, 00:43:05.976 "psk": "key0", 00:43:05.976 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:05.976 "hdgst": false, 00:43:05.976 "ddgst": false, 00:43:05.976 "multipath": "multipath" 00:43:05.976 } 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "method": "bdev_nvme_set_hotplug", 00:43:05.976 "params": { 00:43:05.976 "period_us": 100000, 00:43:05.976 "enable": false 00:43:05.976 } 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "method": "bdev_wait_for_examine" 00:43:05.976 } 00:43:05.976 ] 00:43:05.976 }, 00:43:05.976 { 00:43:05.976 "subsystem": "nbd", 00:43:05.976 "config": [] 00:43:05.976 } 00:43:05.976 ] 00:43:05.976 }' 00:43:05.976 05:18:56 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:05.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:05.976 05:18:56 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:05.976 05:18:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:05.976 [2024-10-28 05:18:56.411064] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:43:05.976 [2024-10-28 05:18:56.411156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561537 ] 00:43:05.976 [2024-10-28 05:18:56.543321] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:06.234 [2024-10-28 05:18:56.583357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:06.234 [2024-10-28 05:18:56.633146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:06.234 [2024-10-28 05:18:56.816901] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:07.164 05:18:57 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:07.164 05:18:57 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:43:07.164 05:18:57 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:07.164 05:18:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:07.164 05:18:57 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:07.164 05:18:57 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:07.164 05:18:57 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:07.164 05:18:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:07.164 05:18:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:07.164 05:18:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:07.164 05:18:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:07.164 05:18:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:07.421 05:18:57 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:07.421 05:18:57 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:07.421 05:18:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:07.421 05:18:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:07.421 05:18:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:07.421 05:18:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:07.421 05:18:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:07.679 05:18:58 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:07.679 05:18:58 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:07.679 05:18:58 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:07.679 05:18:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_get_controllers 00:43:08.244 05:18:58 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:08.244 05:18:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:08.244 05:18:58 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.wXjKsIvcaj /tmp/tmp.5RKItE6NFQ 00:43:08.244 05:18:58 keyring_file -- keyring/file.sh@20 -- # killprocess 2561537 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2561537 ']' 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2561537 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2561537 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2561537' 00:43:08.244 killing process with pid 2561537 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@969 -- # kill 2561537 00:43:08.244 Received shutdown signal, test time was about 1.000000 seconds 00:43:08.244 00:43:08.244 Latency(us) 00:43:08.244 [2024-10-28T04:18:58.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:08.244 [2024-10-28T04:18:58.840Z] =================================================================================================================== 00:43:08.244 [2024-10-28T04:18:58.840Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@974 -- # wait 2561537 00:43:08.244 05:18:58 keyring_file -- keyring/file.sh@21 -- # killprocess 2559846 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2559846 ']' 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2559846 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2559846 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2559846' 00:43:08.244 killing process with pid 2559846 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@969 -- # kill 2559846 00:43:08.244 05:18:58 keyring_file -- common/autotest_common.sh@974 -- # wait 2559846 00:43:08.811 00:43:08.811 real 0m16.757s 00:43:08.811 user 0m41.165s 00:43:08.811 sys 0m3.390s 00:43:08.811 05:18:59 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:08.811 05:18:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:08.811 ************************************ 00:43:08.812 END TEST keyring_file 00:43:08.812 ************************************ 00:43:08.812 05:18:59 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:43:08.812 05:18:59 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:08.812 05:18:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:08.812 05:18:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:08.812 05:18:59 -- common/autotest_common.sh@10 -- # set +x 00:43:08.812 ************************************ 00:43:08.812 START TEST keyring_linux 00:43:08.812 ************************************ 00:43:08.812 05:18:59 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:08.812 Joined session keyring: 681891783 00:43:08.812 * Looking for test storage... 00:43:08.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:08.812 05:18:59 keyring_linux -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:43:08.812 05:18:59 keyring_linux -- common/autotest_common.sh@1689 -- # lcov --version 00:43:08.812 05:18:59 keyring_linux -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:43:08.812 05:18:59 keyring_linux -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:08.812 05:18:59 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:09.097 05:18:59 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:09.098 05:18:59 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:09.098 05:18:59 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:09.098 05:18:59 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:09.098 05:18:59 keyring_linux -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:09.098 05:18:59 keyring_linux -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:43:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:09.098 --rc genhtml_branch_coverage=1 00:43:09.098 --rc genhtml_function_coverage=1 00:43:09.098 --rc genhtml_legend=1 00:43:09.098 --rc geninfo_all_blocks=1 00:43:09.098 --rc geninfo_unexecuted_blocks=1 00:43:09.098 00:43:09.098 ' 00:43:09.098 05:18:59 keyring_linux -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:43:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:09.098 --rc genhtml_branch_coverage=1 00:43:09.098 --rc genhtml_function_coverage=1 00:43:09.098 --rc genhtml_legend=1 00:43:09.098 --rc geninfo_all_blocks=1 00:43:09.098 --rc geninfo_unexecuted_blocks=1 00:43:09.098 00:43:09.098 ' 00:43:09.098 05:18:59 keyring_linux -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:43:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:09.098 --rc genhtml_branch_coverage=1 00:43:09.098 --rc genhtml_function_coverage=1 00:43:09.098 --rc genhtml_legend=1 00:43:09.098 --rc geninfo_all_blocks=1 00:43:09.098 --rc geninfo_unexecuted_blocks=1 00:43:09.098 00:43:09.098 ' 00:43:09.098 05:18:59 keyring_linux -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:43:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:09.098 --rc genhtml_branch_coverage=1 00:43:09.098 --rc genhtml_function_coverage=1 00:43:09.098 --rc genhtml_legend=1 00:43:09.098 --rc geninfo_all_blocks=1 00:43:09.098 --rc geninfo_unexecuted_blocks=1 00:43:09.098 00:43:09.098 ' 00:43:09.098 05:18:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:09.098 05:18:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:09.098 05:18:59 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:09.098 05:18:59 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:09.098 05:18:59 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:09.098 05:18:59 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:09.098 05:18:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:09.098 05:18:59 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:09.098 05:18:59 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:09.098 05:18:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:09.098 05:18:59 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
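The NVME_HOSTNQN and NVME_HOSTID values captured above come from nvme-cli's gen-hostnqn helper, which common.sh invokes while it is being sourced. A minimal sketch of that derivation, assuming the host ID is simply the UUID suffix of the generated NQN (the exact common.sh wording may differ):
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip everything up to the last ':' to keep the bare UUID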
00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:09.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:09.098 05:18:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:09.098 05:18:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:09.098 05:18:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:09.098 05:18:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:09.098 05:18:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:09.098 05:18:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:09.098 05:18:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:09.098 05:18:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:09.098 05:18:59 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:09.098 05:18:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:09.098 05:18:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:09.098 05:18:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:09.098 05:18:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:43:09.098 05:18:59 keyring_linux -- nvmf/common.sh@731 -- # python - 00:43:09.098 05:18:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:09.098 05:18:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:09.098 /tmp/:spdk-test:key0 00:43:09.099 05:18:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:09.099 05:18:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:09.099 05:18:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:09.099 05:18:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:09.099 05:18:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:09.099 05:18:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:09.099 
05:18:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:09.099 05:18:59 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:09.099 05:18:59 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:43:09.099 05:18:59 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:43:09.099 05:18:59 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:43:09.099 05:18:59 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:43:09.099 05:18:59 keyring_linux -- nvmf/common.sh@731 -- # python - 00:43:09.099 05:18:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:09.099 05:18:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:09.099 /tmp/:spdk-test:key1 00:43:09.099 05:18:59 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2562007 00:43:09.099 05:18:59 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:09.099 05:18:59 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2562007 00:43:09.099 05:18:59 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2562007 ']' 00:43:09.099 05:18:59 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:09.099 05:18:59 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:09.099 05:18:59 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:09.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:09.099 05:18:59 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:09.099 05:18:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:09.099 [2024-10-28 05:18:59.574383] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:43:09.099 [2024-10-28 05:18:59.574466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562007 ] 00:43:09.399 [2024-10-28 05:18:59.706216] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
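The prep_key/format_interchange_psk calls above turn each raw hex string into the NVMe TLS PSK interchange format before writing it to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 (mode 0600): an NVMeTLSkey-1 prefix, a two-digit hash identifier (00 here, since digest=0 selects no PSK hash), a base64 blob, and a trailing ':'. A standalone sketch of that encoding, assuming the blob is base64 of the PSK bytes followed by their CRC-32 packed little-endian; this approximates the inline 'python -' step traced above rather than reproducing common.sh verbatim:
python3 - << 'PY'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"               # the literal string handed to prep_key key0
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)   # CRC-32 of the key bytes, little-endian (assumption)
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
The printed string should have the same shape as the NVMeTLSkey-1:00:MDAx...: payloads echoed into the key files above.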
00:43:09.399 [2024-10-28 05:18:59.742668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:09.399 [2024-10-28 05:18:59.792290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:10.330 05:19:00 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:10.330 05:19:00 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:43:10.330 05:19:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:10.330 05:19:00 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.330 05:19:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:10.330 [2024-10-28 05:19:00.593864] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:10.330 null0 00:43:10.330 [2024-10-28 05:19:00.625841] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:10.330 [2024-10-28 05:19:00.626328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:10.330 05:19:00 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:10.330 05:19:00 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:10.330 712366415 00:43:10.330 05:19:00 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:10.330 101944746 00:43:10.330 05:19:00 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2562126 00:43:10.330 05:19:00 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:10.330 05:19:00 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2562126 /var/tmp/bperf.sock 00:43:10.330 05:19:00 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2562126 ']' 00:43:10.330 05:19:00 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:10.330 05:19:00 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:10.330 05:19:00 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:10.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:10.330 05:19:00 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:10.330 05:19:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:10.330 [2024-10-28 05:19:00.697143] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.11.0-rc1 initialization... 00:43:10.330 [2024-10-28 05:19:00.697222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562126 ] 00:43:10.330 [2024-10-28 05:19:00.828296] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
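The two 'keyctl add user ... @s' lines above install those interchange-format PSKs into the kernel session keyring under the names :spdk-test:key0 and :spdk-test:key1, and the numbers printed right after them (712366415 and 101944746) are the serials the kernel assigned; later steps resolve the names back to these serials and pass the names to bdev_nvme_attach_controller via --psk once the keyring_linux module is enabled over bperf.sock. A minimal keyctl(1) sketch of that round trip (payload elided here; the full NVMeTLSkey-1 string is the one shown above, and the serial is whatever 'add' prints on your system):
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:...:" @s   # prints the new key's serial, e.g. 712366415
keyctl search @s user :spdk-test:key0                       # resolve the name back to that serial
keyctl print 712366415                                      # dump the payload to check it round-trips intact
keyctl unlink 712366415                                     # remove it from the session keyring during cleanup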
00:43:10.330 [2024-10-28 05:19:00.869121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:10.330 [2024-10-28 05:19:00.919053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:11.265 05:19:01 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:11.265 05:19:01 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:43:11.265 05:19:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:11.265 05:19:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:11.523 05:19:01 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:11.523 05:19:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:11.782 05:19:02 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:11.782 05:19:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:12.039 [2024-10-28 05:19:02.579942] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:12.297 nvme0n1 00:43:12.297 05:19:02 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:43:12.297 05:19:02 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:12.297 05:19:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:12.297 05:19:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:12.297 05:19:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:12.297 05:19:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:12.555 05:19:02 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:12.555 05:19:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:12.555 05:19:02 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:12.555 05:19:02 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:12.555 05:19:02 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:12.555 05:19:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:12.555 05:19:02 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:12.815 05:19:03 keyring_linux -- keyring/linux.sh@25 -- # sn=712366415 00:43:12.815 05:19:03 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:12.815 05:19:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:12.815 05:19:03 keyring_linux -- keyring/linux.sh@26 -- # [[ 712366415 == \7\1\2\3\6\6\4\1\5 ]] 00:43:12.815 05:19:03 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 712366415 00:43:12.815 05:19:03 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:12.815 05:19:03 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:12.815 Running I/O for 1 seconds... 00:43:14.195 6230.00 IOPS, 24.34 MiB/s 00:43:14.195 Latency(us) 00:43:14.195 [2024-10-28T04:19:04.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:14.195 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:14.195 nvme0n1 : 1.02 6261.03 24.46 0.00 0.00 20303.01 8807.88 28418.80 00:43:14.195 [2024-10-28T04:19:04.791Z] =================================================================================================================== 00:43:14.195 [2024-10-28T04:19:04.791Z] Total : 6261.03 24.46 0.00 0.00 20303.01 8807.88 28418.80 00:43:14.195 { 00:43:14.195 "results": [ 00:43:14.195 { 00:43:14.195 "job": "nvme0n1", 00:43:14.195 "core_mask": "0x2", 00:43:14.195 "workload": "randread", 00:43:14.195 "status": "finished", 00:43:14.195 "queue_depth": 128, 00:43:14.195 "io_size": 4096, 00:43:14.195 "runtime": 1.015648, 00:43:14.195 "iops": 6261.027442578531, 00:43:14.195 "mibps": 24.457138447572387, 00:43:14.195 "io_failed": 0, 00:43:14.195 "io_timeout": 0, 00:43:14.195 "avg_latency_us": 20303.01445440123, 00:43:14.195 "min_latency_us": 8807.882680527195, 00:43:14.195 "max_latency_us": 28418.803786894376 00:43:14.195 } 00:43:14.195 ], 00:43:14.195 "core_count": 1 00:43:14.195 } 00:43:14.195 05:19:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:14.195 05:19:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:14.195 05:19:04 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:14.195 05:19:04 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:14.195 05:19:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:14.195 05:19:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:14.195 05:19:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:14.195 05:19:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:14.453 05:19:04 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:14.453 05:19:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:14.453 05:19:04 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:14.453 05:19:04 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:14.453 05:19:04 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:43:14.453 05:19:04 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:14.453 05:19:04 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:14.453 05:19:04 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:14.453 05:19:04 keyring_linux -- common/autotest_common.sh@642 -- # type 
-t bperf_cmd 00:43:14.453 05:19:04 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:14.453 05:19:04 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:14.453 05:19:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:14.712 [2024-10-28 05:19:05.194445] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:14.712 [2024-10-28 05:19:05.194850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b3960 (107): Transport endpoint is not connected 00:43:14.712 [2024-10-28 05:19:05.195838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b3960 (9): Bad file descriptor 00:43:14.712 [2024-10-28 05:19:05.196835] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:14.712 [2024-10-28 05:19:05.196854] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:14.712 [2024-10-28 05:19:05.196868] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:14.712 [2024-10-28 05:19:05.196883] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:43:14.712 request: 00:43:14.712 { 00:43:14.712 "name": "nvme0", 00:43:14.712 "trtype": "tcp", 00:43:14.712 "traddr": "127.0.0.1", 00:43:14.712 "adrfam": "ipv4", 00:43:14.712 "trsvcid": "4420", 00:43:14.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:14.712 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:14.712 "prchk_reftag": false, 00:43:14.712 "prchk_guard": false, 00:43:14.712 "hdgst": false, 00:43:14.712 "ddgst": false, 00:43:14.712 "psk": ":spdk-test:key1", 00:43:14.712 "allow_unrecognized_csi": false, 00:43:14.712 "method": "bdev_nvme_attach_controller", 00:43:14.712 "req_id": 1 00:43:14.712 } 00:43:14.712 Got JSON-RPC error response 00:43:14.712 response: 00:43:14.712 { 00:43:14.712 "code": -5, 00:43:14.712 "message": "Input/output error" 00:43:14.712 } 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@33 -- # sn=712366415 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 712366415 00:43:14.712 1 links removed 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@33 -- # sn=101944746 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 101944746 00:43:14.712 1 links removed 00:43:14.712 05:19:05 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2562126 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2562126 ']' 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2562126 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2562126 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2562126' 00:43:14.712 killing process with pid 2562126 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@969 -- # kill 2562126 00:43:14.712 Received shutdown signal, test time was about 1.000000 seconds 00:43:14.712 00:43:14.712 
Latency(us) 00:43:14.712 [2024-10-28T04:19:05.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:14.712 [2024-10-28T04:19:05.308Z] =================================================================================================================== 00:43:14.712 [2024-10-28T04:19:05.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:14.712 05:19:05 keyring_linux -- common/autotest_common.sh@974 -- # wait 2562126 00:43:14.971 05:19:05 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2562007 00:43:14.971 05:19:05 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2562007 ']' 00:43:14.971 05:19:05 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2562007 00:43:14.971 05:19:05 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:43:14.971 05:19:05 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:14.971 05:19:05 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2562007 00:43:14.971 05:19:05 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:14.971 05:19:05 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:14.971 05:19:05 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2562007' 00:43:14.971 killing process with pid 2562007 00:43:14.971 05:19:05 keyring_linux -- common/autotest_common.sh@969 -- # kill 2562007 00:43:14.971 05:19:05 keyring_linux -- common/autotest_common.sh@974 -- # wait 2562007 00:43:15.540 00:43:15.540 real 0m6.633s 00:43:15.540 user 0m12.486s 00:43:15.540 sys 0m1.713s 00:43:15.540 05:19:05 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:15.540 05:19:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:15.540 ************************************ 00:43:15.540 END TEST keyring_linux 00:43:15.540 ************************************ 00:43:15.540 05:19:05 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:43:15.540 05:19:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:15.540 05:19:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:15.541 05:19:05 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:43:15.541 05:19:05 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:43:15.541 05:19:05 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:43:15.541 05:19:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:15.541 05:19:05 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:15.541 05:19:05 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:43:15.541 05:19:05 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:43:15.541 05:19:05 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:15.541 05:19:05 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:43:15.541 05:19:05 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:15.541 05:19:05 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:15.541 05:19:05 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:43:15.541 05:19:05 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:43:15.541 05:19:05 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:43:15.541 05:19:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:15.541 05:19:05 -- common/autotest_common.sh@10 -- # set +x 00:43:15.541 05:19:05 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:43:15.541 05:19:05 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:43:15.541 05:19:05 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:43:15.541 05:19:05 -- common/autotest_common.sh@10 -- # set +x 00:43:17.443 INFO: APP EXITING 
00:43:17.443 INFO: killing all VMs 00:43:17.444 INFO: killing vhost app 00:43:17.444 INFO: EXIT DONE 00:43:18.381 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:43:18.381 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:43:18.381 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:43:18.381 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:43:18.381 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:43:18.381 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:43:18.381 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:43:18.381 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:43:18.638 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:43:18.639 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:43:18.639 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:43:18.639 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:43:18.639 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:43:18.639 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:43:18.639 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:43:18.639 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:43:18.639 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:43:20.023 Cleaning 00:43:20.023 Removing: /var/run/dpdk/spdk0/config 00:43:20.023 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:20.023 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:20.023 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:20.023 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:20.023 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:20.023 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:20.023 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:20.023 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:20.023 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:20.023 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:20.023 Removing: /var/run/dpdk/spdk1/config 00:43:20.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:20.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:20.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:20.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:20.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:20.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:20.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:20.023 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:20.023 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:20.023 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:20.023 Removing: /var/run/dpdk/spdk2/config 00:43:20.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:20.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:20.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:20.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:20.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:20.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:20.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:20.023 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:20.023 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:20.023 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:20.023 Removing: /var/run/dpdk/spdk3/config 00:43:20.023 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:20.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:20.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:20.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:20.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:20.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:20.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:20.023 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:20.023 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:20.023 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:20.023 Removing: /var/run/dpdk/spdk4/config 00:43:20.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:20.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:20.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:20.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:20.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:20.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:20.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:20.023 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:20.023 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:20.023 Removing: /var/run/dpdk/spdk4/hugepage_info 00:43:20.023 Removing: /dev/shm/bdev_svc_trace.1 00:43:20.023 Removing: /dev/shm/nvmf_trace.0 00:43:20.023 Removing: /dev/shm/spdk_tgt_trace.pid2175481 00:43:20.023 Removing: /var/run/dpdk/spdk0 00:43:20.023 Removing: /var/run/dpdk/spdk1 00:43:20.023 Removing: /var/run/dpdk/spdk2 00:43:20.023 Removing: /var/run/dpdk/spdk3 00:43:20.023 Removing: /var/run/dpdk/spdk4 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2173700 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2174552 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2175481 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2175932 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2176610 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2176751 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2177454 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2177587 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2177847 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2179143 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2180168 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2180478 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2180683 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2181020 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2181342 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2181500 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2181649 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2181839 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2182216 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2184707 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2184878 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2185164 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2185299 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2185602 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2185735 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2186150 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2186282 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2186450 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2186585 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2186747 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2186882 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2187372 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2187521 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2187726 00:43:20.023 Removing: 
/var/run/dpdk/spdk_pid2190199 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2193323 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2200324 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2200725 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2203232 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2203507 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2206248 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2210020 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2212194 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2218798 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2224101 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2225386 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2226031 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2237042 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2239441 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2294224 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2297499 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2301525 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2306116 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2306118 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2306754 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2307279 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2307923 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2308308 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2308376 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2308570 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2308700 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2308704 00:43:20.023 Removing: /var/run/dpdk/spdk_pid2309350 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2309987 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2310517 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2310973 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2311027 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2311245 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2312323 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2313131 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2318850 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2347161 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2350217 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2351289 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2352578 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2352766 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2352918 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2353115 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2353678 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2354963 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2356019 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2356490 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2358196 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2358621 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2359165 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2361780 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2365765 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2365766 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2365767 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2367974 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2370155 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2373515 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2396400 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2399237 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2403079 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2404008 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2405205 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2406276 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2409190 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2411662 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2415999 00:43:20.282 Removing: 
/var/run/dpdk/spdk_pid2416090 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2418990 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2419128 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2419291 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2419639 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2419647 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2420819 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2421965 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2423232 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2424396 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2426147 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2427302 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2431187 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2431510 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2432895 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2433619 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2437411 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2439344 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2442842 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2446253 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2453472 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2457892 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2457895 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2470606 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2471127 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2471652 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2472043 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2472746 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2473263 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2473782 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2474246 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2476875 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2477062 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2480793 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2480975 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2484413 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2487004 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2494290 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2494732 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2497283 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2497551 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2500149 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2503855 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2505903 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2512441 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2517741 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2519031 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2519732 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2530408 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2532761 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2534677 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2539810 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2539816 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2542690 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2544067 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2545473 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2546279 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2547651 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2548510 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2554269 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2554839 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2555223 00:43:20.282 Removing: /var/run/dpdk/spdk_pid2556767 00:43:20.541 Removing: /var/run/dpdk/spdk_pid2557159 00:43:20.541 Removing: /var/run/dpdk/spdk_pid2557432 00:43:20.541 Removing: /var/run/dpdk/spdk_pid2559846 00:43:20.541 Removing: /var/run/dpdk/spdk_pid2559981 00:43:20.541 Removing: 
/var/run/dpdk/spdk_pid2561537 00:43:20.541 Removing: /var/run/dpdk/spdk_pid2562007 00:43:20.541 Removing: /var/run/dpdk/spdk_pid2562126 00:43:20.541 Clean 00:43:20.541 05:19:10 -- common/autotest_common.sh@1449 -- # return 0 00:43:20.541 05:19:10 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:43:20.541 05:19:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:20.541 05:19:10 -- common/autotest_common.sh@10 -- # set +x 00:43:20.541 05:19:10 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:43:20.541 05:19:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:20.541 05:19:10 -- common/autotest_common.sh@10 -- # set +x 00:43:20.541 05:19:11 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:20.541 05:19:11 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:20.541 05:19:11 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:20.541 05:19:11 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:43:20.541 05:19:11 -- spdk/autotest.sh@394 -- # hostname 00:43:20.541 05:19:11 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:20.799 geninfo: WARNING: invalid characters removed from testname! 00:43:52.859 05:19:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:55.390 05:19:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:59.585 05:19:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:02.111 05:19:52 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:05.445 05:19:55 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:07.973 05:19:58 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:11.257 05:20:01 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:11.257 05:20:01 -- common/autotest_common.sh@1688 -- $ [[ y == y ]] 00:44:11.257 05:20:01 -- common/autotest_common.sh@1689 -- $ lcov --version 00:44:11.257 05:20:01 -- common/autotest_common.sh@1689 -- $ awk '{print $NF}' 00:44:11.257 05:20:01 -- common/autotest_common.sh@1689 -- $ lt 1.15 2 00:44:11.257 05:20:01 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:44:11.257 05:20:01 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:44:11.257 05:20:01 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:44:11.257 05:20:01 -- scripts/common.sh@336 -- $ IFS=.-: 00:44:11.257 05:20:01 -- scripts/common.sh@336 -- $ read -ra ver1 00:44:11.257 05:20:01 -- scripts/common.sh@337 -- $ IFS=.-: 00:44:11.257 05:20:01 -- scripts/common.sh@337 -- $ read -ra ver2 00:44:11.257 05:20:01 -- scripts/common.sh@338 -- $ local 'op=<' 00:44:11.257 05:20:01 -- scripts/common.sh@340 -- $ ver1_l=2 00:44:11.257 05:20:01 -- scripts/common.sh@341 -- $ ver2_l=1 00:44:11.257 05:20:01 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:44:11.257 05:20:01 -- scripts/common.sh@344 -- $ case "$op" in 00:44:11.257 05:20:01 -- scripts/common.sh@345 -- $ : 1 00:44:11.257 05:20:01 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:44:11.257 05:20:01 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:11.257 05:20:01 -- scripts/common.sh@365 -- $ decimal 1 00:44:11.257 05:20:01 -- scripts/common.sh@353 -- $ local d=1 00:44:11.257 05:20:01 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:44:11.257 05:20:01 -- scripts/common.sh@355 -- $ echo 1 00:44:11.257 05:20:01 -- scripts/common.sh@365 -- $ ver1[v]=1 00:44:11.257 05:20:01 -- scripts/common.sh@366 -- $ decimal 2 00:44:11.257 05:20:01 -- scripts/common.sh@353 -- $ local d=2 00:44:11.257 05:20:01 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:44:11.257 05:20:01 -- scripts/common.sh@355 -- $ echo 2 00:44:11.257 05:20:01 -- scripts/common.sh@366 -- $ ver2[v]=2 00:44:11.257 05:20:01 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:44:11.257 05:20:01 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:44:11.257 05:20:01 -- scripts/common.sh@368 -- $ return 0 00:44:11.257 05:20:01 -- common/autotest_common.sh@1690 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:11.257 05:20:01 -- common/autotest_common.sh@1702 -- $ export 'LCOV_OPTS= 00:44:11.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.257 --rc genhtml_branch_coverage=1 00:44:11.257 --rc genhtml_function_coverage=1 00:44:11.257 --rc genhtml_legend=1 00:44:11.257 --rc geninfo_all_blocks=1 00:44:11.257 --rc geninfo_unexecuted_blocks=1 00:44:11.257 00:44:11.257 ' 00:44:11.257 05:20:01 -- common/autotest_common.sh@1702 -- $ LCOV_OPTS=' 00:44:11.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.257 --rc genhtml_branch_coverage=1 00:44:11.257 --rc genhtml_function_coverage=1 00:44:11.257 --rc genhtml_legend=1 00:44:11.257 --rc geninfo_all_blocks=1 00:44:11.257 --rc geninfo_unexecuted_blocks=1 00:44:11.257 00:44:11.257 ' 00:44:11.257 05:20:01 -- common/autotest_common.sh@1703 -- $ export 'LCOV=lcov 00:44:11.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.257 --rc genhtml_branch_coverage=1 00:44:11.257 --rc genhtml_function_coverage=1 00:44:11.257 --rc genhtml_legend=1 00:44:11.257 --rc geninfo_all_blocks=1 00:44:11.257 --rc geninfo_unexecuted_blocks=1 00:44:11.257 00:44:11.257 ' 00:44:11.257 05:20:01 -- common/autotest_common.sh@1703 -- $ LCOV='lcov 00:44:11.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.257 --rc genhtml_branch_coverage=1 00:44:11.257 --rc genhtml_function_coverage=1 00:44:11.257 --rc genhtml_legend=1 00:44:11.257 --rc geninfo_all_blocks=1 00:44:11.257 --rc geninfo_unexecuted_blocks=1 00:44:11.257 00:44:11.257 ' 00:44:11.257 05:20:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:11.257 05:20:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:44:11.257 05:20:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:44:11.257 05:20:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:11.257 05:20:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:11.257 05:20:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.257 05:20:01 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.257 05:20:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.257 05:20:01 -- paths/export.sh@5 -- $ export PATH 00:44:11.257 05:20:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.257 05:20:01 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:44:11.257 05:20:01 -- common/autobuild_common.sh@486 -- $ date +%s 00:44:11.257 05:20:01 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730089201.XXXXXX 00:44:11.257 05:20:01 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730089201.M8sObk 00:44:11.257 05:20:01 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:44:11.257 05:20:01 -- common/autobuild_common.sh@492 -- $ '[' -n main ']' 00:44:11.257 05:20:01 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:44:11.257 05:20:01 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:44:11.257 05:20:01 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:44:11.257 05:20:01 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:44:11.257 05:20:01 -- common/autobuild_common.sh@502 -- $ get_config_params 00:44:11.257 05:20:01 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:44:11.257 05:20:01 -- common/autotest_common.sh@10 -- $ set +x 00:44:11.257 05:20:01 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:44:11.257 05:20:01 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:44:11.257 05:20:01 -- pm/common@17 -- $ local monitor 00:44:11.257 05:20:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:11.257 05:20:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:11.257 05:20:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:11.257 
05:20:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:11.257 05:20:01 -- pm/common@21 -- $ date +%s 00:44:11.257 05:20:01 -- pm/common@21 -- $ date +%s 00:44:11.257 05:20:01 -- pm/common@25 -- $ sleep 1 00:44:11.257 05:20:01 -- pm/common@21 -- $ date +%s 00:44:11.257 05:20:01 -- pm/common@21 -- $ date +%s 00:44:11.257 05:20:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730089201 00:44:11.257 05:20:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730089201 00:44:11.257 05:20:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730089201 00:44:11.257 05:20:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730089201 00:44:11.257 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730089201_collect-vmstat.pm.log 00:44:11.257 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730089201_collect-cpu-load.pm.log 00:44:11.257 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730089201_collect-cpu-temp.pm.log 00:44:11.257 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730089201_collect-bmc-pm.bmc.pm.log 00:44:12.198 05:20:02 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:44:12.198 05:20:02 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:44:12.198 05:20:02 -- spdk/autopackage.sh@14 -- $ timing_finish 00:44:12.198 05:20:02 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:12.198 05:20:02 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:44:12.198 05:20:02 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:12.198 05:20:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:44:12.198 05:20:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:44:12.198 05:20:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:44:12.198 05:20:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:12.198 05:20:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:44:12.198 05:20:02 -- pm/common@44 -- $ pid=2569362 00:44:12.198 05:20:02 -- pm/common@50 -- $ kill -TERM 2569362 00:44:12.198 05:20:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:12.198 05:20:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:44:12.198 05:20:02 -- pm/common@44 -- $ pid=2569364 00:44:12.198 05:20:02 -- pm/common@50 -- $ kill -TERM 2569364 00:44:12.198 05:20:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:12.198 
05:20:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:44:12.198 05:20:02 -- pm/common@44 -- $ pid=2569366 00:44:12.198 05:20:02 -- pm/common@50 -- $ kill -TERM 2569366 00:44:12.198 05:20:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:12.198 05:20:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:44:12.198 05:20:02 -- pm/common@44 -- $ pid=2569394 00:44:12.198 05:20:02 -- pm/common@50 -- $ sudo -E kill -TERM 2569394 00:44:12.198 + [[ -n 2086271 ]] 00:44:12.198 + sudo kill 2086271 00:44:12.207 [Pipeline] } 00:44:12.222 [Pipeline] // stage 00:44:12.227 [Pipeline] } 00:44:12.240 [Pipeline] // timeout 00:44:12.245 [Pipeline] } 00:44:12.258 [Pipeline] // catchError 00:44:12.263 [Pipeline] } 00:44:12.278 [Pipeline] // wrap 00:44:12.284 [Pipeline] } 00:44:12.296 [Pipeline] // catchError 00:44:12.304 [Pipeline] stage 00:44:12.306 [Pipeline] { (Epilogue) 00:44:12.317 [Pipeline] catchError 00:44:12.319 [Pipeline] { 00:44:12.330 [Pipeline] echo 00:44:12.332 Cleanup processes 00:44:12.337 [Pipeline] sh 00:44:12.614 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:12.614 2569559 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:44:12.614 2569676 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:12.626 [Pipeline] sh 00:44:12.906 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:12.906 ++ awk '{print $1}' 00:44:12.906 ++ grep -v 'sudo pgrep' 00:44:12.906 + sudo kill -9 2569559 00:44:12.917 [Pipeline] sh 00:44:13.197 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:25.410 [Pipeline] sh 00:44:25.688 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:25.688 Artifacts sizes are good 00:44:25.703 [Pipeline] archiveArtifacts 00:44:25.710 Archiving artifacts 00:44:25.856 [Pipeline] sh 00:44:26.140 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:26.154 [Pipeline] cleanWs 00:44:26.163 [WS-CLEANUP] Deleting project workspace... 00:44:26.163 [WS-CLEANUP] Deferred wipeout is used... 00:44:26.169 [WS-CLEANUP] done 00:44:26.170 [Pipeline] } 00:44:26.186 [Pipeline] // catchError 00:44:26.197 [Pipeline] sh 00:44:26.477 + logger -p user.info -t JENKINS-CI 00:44:26.487 [Pipeline] } 00:44:26.500 [Pipeline] // stage 00:44:26.504 [Pipeline] } 00:44:26.518 [Pipeline] // node 00:44:26.522 [Pipeline] End of Pipeline 00:44:26.560 Finished: SUCCESS
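
For readers following the coverage steps traced above (spdk/autotest.sh@394-404), the sequence reduces to: capture the post-test counters, fold them into the pre-test baseline, then strip paths that should not count toward SPDK coverage. The script below is a condensed sketch of those invocations, not the job's actual code; SPDK_DIR and OUT_DIR are illustrative stand-ins for the workspace layout, and the extra --ignore-errors flag used on the '/usr/*' pass in the log is only noted in a comment.

#!/usr/bin/env bash
# Condensed sketch of the coverage post-processing traced above (spdk/autotest.sh@394-404).
# SPDK_DIR and OUT_DIR are assumed stand-ins for the workspace paths.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
OUT_DIR=${OUT_DIR:-$SPDK_DIR/../output}
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

# 1. Capture the counters produced by the test run, tagged with the host name.
lcov $LCOV_OPTS -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" \
    -o "$OUT_DIR/cov_test.info"

# 2. Fold the capture into the pre-test baseline.
lcov $LCOV_OPTS -q -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" \
    -o "$OUT_DIR/cov_total.info"

# 3. Strip paths that should not count toward SPDK coverage
#    (the '/usr/*' pass in the log also adds --ignore-errors unused,unused).
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -q -r "$OUT_DIR/cov_total.info" "$pattern" -o "$OUT_DIR/cov_total.info"
done

# 4. Drop the intermediate captures (the job also removes OLD_STDOUT/OLD_STDERR at this point).
rm -f "$OUT_DIR/cov_base.info" "$OUT_DIR/cov_test.info"
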
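The burst of scripts/common.sh trace around 00:44:11 is the helper deciding whether the installed lcov (1.15 here) is older than 2, so the legacy --rc lcov_branch_coverage spelling can be kept. Below is a minimal re-statement of that field-by-field comparison; version_lt is an illustrative name of my own, while the job itself goes through cmp_versions and decimal in scripts/common.sh.

#!/usr/bin/env bash
# Re-statement of the version check behind the "lt 1.15 2" trace above: split both
# versions on '.', '-' and ':', then compare field by field, treating missing or
# non-numeric fields as 0. version_lt is an illustrative name, not the SPDK helper.
version_lt() {
    local -a ver1 ver2
    local v n1 n2 len
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        n1=${ver1[v]:-0}; n2=${ver2[v]:-0}
        [[ $n1 =~ ^[0-9]+$ ]] || n1=0
        [[ $n2 =~ ^[0-9]+$ ]] || n2=0
        (( n1 < n2 )) && return 0    # strictly smaller in this field: "less than"
        (( n1 > n2 )) && return 1
    done
    return 1                         # all fields equal: not less than
}

# As in the log: lcov 1.15 is older than 2, so the legacy --rc option names are used.
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
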
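The pm/common lines show the resource monitors being started with a shared per-run prefix and later stopped through their pid files from the EXIT trap. The sketch below captures that start/stop pattern under stated assumptions: start_monitors and stop_monitors are illustrative names, the collectors are assumed to live under spdk/scripts/perf/pm as in the trace, the real collectors write their own pid files (echoing $! here is a stand-in), and collect-bmc-pm actually runs under sudo -E in the traced job.

#!/usr/bin/env bash
# Sketch of the pid-file pattern behind the pm/common lines above. Not the real
# scripts/perf/pm implementation; function names and the $! pid files are stand-ins.
SPDK_PM_DIR=${SPDK_PM_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm}
MONITOR_DIR=${MONITOR_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power}
PREFIX="monitor.autopackage.sh.$(date +%s)"

start_monitors() {
    local m
    mkdir -p "$MONITOR_DIR"
    for m in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        # -d: output directory, -l: log to file, -p: per-run prefix, as seen in the trace.
        "$SPDK_PM_DIR/$m" -d "$MONITOR_DIR" -l -p "$PREFIX" &
        echo $! > "$MONITOR_DIR/$m.pid"
    done
}

stop_monitors() {
    local pidfile pid
    for pidfile in "$MONITOR_DIR"/collect-*.pid; do
        [[ -e $pidfile ]] || continue
        pid=$(<"$pidfile")
        kill -TERM "$pid" 2>/dev/null || true
    done
}

# Mirrors "trap stop_monitor_resources EXIT" in the log.
trap stop_monitors EXIT
start_monitors
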
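The epilogue's "Cleanup processes" step is the same pgrep/grep/awk/kill pipeline used in the prologue: list anything still running out of the workspace, drop the pgrep invocation itself, and force-kill the remainder. A small sketch, with WORKSPACE standing in for the Jenkins job path shown in the log:

#!/usr/bin/env bash
# Sketch of the workspace process cleanup step; WORKSPACE is an assumed stand-in.
WORKSPACE=${WORKSPACE:-/var/jenkins/workspace/nvmf-tcp-phy-autotest}

pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
if [ -n "$pids" ]; then
    sudo kill -9 $pids    # intentionally unquoted: one argument per surviving pid
fi
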